From: Steve Knight
Subject: Re: Virtues of Lisp syntax
Date: 
Message-ID: <1350030@otter.hpl.hp.com>
Andy (& Jeff too) point out that:
>Prolog most definitely has quoting rules; case matters in obscure ways
>and constants that begin with the wrong case have to be quoted.  The result
>isn't simpler.

OK - you can see the distinction between variables (begin with upper case)
and atoms (anything else) as a quoting scheme.  However, it certainly has
none of the complexity of Lisp's quoting schemes -- there's nothing that
corresponds to backquoting.  It hardly deserves the adjective 'obscure'.
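
To make the comparison concrete, here is a rough sketch of the Lisp side
(the names are made up; the Prolog analogue is only described in the
comments):

    (setq fruit 'apple)        ; quote marks APPLE as data, not a variable
    (list 'fruit fruit)        ; => (FRUIT APPLE)  -- quoted vs evaluated
    `(we have ,fruit today)    ; => (WE HAVE APPLE TODAY)  -- backquote/comma
    ;; In Prolog the case of the initial letter does the whole job:
    ;; fruit is an atom (no quote needed), Fruit is a variable, and
    ;; nothing corresponds to the backquote machinery.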

As evidence of this, I've never seen a failure-to-quote error committed in
Prolog, though I've seen it many times in Lisp.  Is this just a UK thing?
Perhaps teaching methods in the US mean that students rarely or never 
make those errors?  I know it is a big problem for Lisp acceptance for
students in the UK.

>As to the advantages of infix/postfix/prefix syntax, I note that
>operator arity is not limited to 1 and 2.  Again, prolog uses two
>styles of syntax to handle a case that is handled by one syntax in
>lisp.  (I'm referring to the opr(i,j,k,l,m) vs a <- b , c business.)

This point eludes me.  Prolog has a limited way of extending its own
syntax, it is true.  I was simply stating my view that it is able to
create a more satisfactory syntax even with those limits.

Obviously, the attractiveness of syntax is in the eye of the beholder.
I have to accept that folks like Jeff are sincere when they argue that the
syntax of Lisp is very attractive for them.  (I am even inclined to agree
when the alternative is C++.)  

My argument, which wasn't contradicted, I think, was only that you could
have the same benefits of Lisp syntax without the exact same syntax.

Steve

From: Jeff Dalton
Subject: Re: Virtues of Lisp syntax
Date: 
Message-ID: <3408@skye.ed.ac.uk>
In article <·······@otter.hpl.hp.com> ···@otter.hpl.hp.com (Steve Knight) writes:
>Andy (& Jeff too) point out that:
>>Prolog most definitely has quoting rules; case matters in obscure ways
>>and constants that begin with the wrong case have to be quoted.  The result
>>isn't simpler.
>
>OK - you can see the distinction between variables (begin with upper case)
>and atoms (anything else) as a quoting scheme.  However, it certainly has
>none of the complexity of Lisp's quoting schemes -- there's nothing that
>corresponds to backquoting.  It hardly deserves the adjective 'obscure'.

Backquote is, in my opinion, a separate issue.  Let me put it this
way: the rules for the use of QUOTE in Lisp (and for its abbreviation
the single quote) are neither obscure nor difficult to understand.
As I said in my previous message, the quoting rules for Lisp needn't
be any more confusing than those of, say, Basic.

As far as Prolog is concerned, the original claim was that Prolog
didn't have quoting rules.  I don't really want to quarrel about
whether Prolog is better or worse than Lisp in this respect or in
general.

>As evidence of this, I've never seen a failure-to-quote error committed in
>Prolog, though I've seen it many times in Lisp.  Is this just a UK thing?
>Perhaps teaching methods in the US mean that students rarely or never 
>make those errors?  I know it is a big problem for Lisp acceptance for
>students in the UK.

In my experience, in both the US and the UK, it is true that students
often make quoting mistakes in Lisp and that they often find the
quoting rules confusing.  I think there are a number of possible
contributing factors.

A common mistake in presentation, I think, is to make non-evaluation
seem too much of a special case.  For example, some people see SETQ as
confusing because it evaluates one "argument" and not another and so
they prefer to present SET first.  This doesn't work so well in Scheme
or even in Common Lisp, but it sort of worked for older Lisps.  
Unfortunately, it makes SETQ seem like a special case when it's
actually a pretty standard assignment:

          Lisp                                Basic

      (setq a 'apples)                let a$ = "apples"
      (setq b a)                      let b$ = a$
   vs (setq b 'a)                  vs let b$ = "a$"

Indeed, the whole terminology in which SETQ, COND, etc. are presented
as funny kinds of "functions" and where functions are described as
"evaluating an argument" (or not) may be a mistake.

>Obviously, the attractiveness of syntax is in the eye of the beholder.
>I have to accept that folks like Jeff are sincere when they argue that the
>syntax of Lisp is very attractive for them.  (I am even inclined to agree
>when the alternative is C++.)  

I would agree, provided we don't take this "eye of the beholder"
stuff too far.  It's true that different people will prefer different
syntaxes and that we can't say they're wrong to do so.  However, we
shouldn't go on to conclude that all views on the virtues or otherwise
of a syntax are equally valid.  Sometimes we can say, for example,
that someone hasn't learned good techniques for reading and writing
code in a particular syntax and that's why they find it so hard
to read and write.

>My argument, which wasn't contradicted, I think, was only that you could
>have the same benefits of Lisp syntax without the exact same syntax.

Actually, I did disagree with you on this point.  Perhaps I should
have said so more explicitly.  I don't think you can get the same
benefits.  You can get *some* of the benefits, but by sacrificing some
of the others.  Lisp occupies something like a local maximum in
"benefit space".

-- Jeff
From: Aaron Sloman
Subject: Re: Virtues(?) of Lisp syntax
Date: 
Message-ID: <3450@syma.sussex.ac.uk>
It's interesting to see this debate surface yet again.

Jeff Dalton writes
> In my experience, in both the US and the UK, it is true that students
> often make quoting mistakes in Lisp and that they often find the
> quoting rules confusing.  I think there are a number of possible
> contributing factors.
>
> A common mistake in presentation, I think, is to make non-evaluation
> seem too much of a special case.  For example, some people see SETQ as
> confusing because it evaluates one "argument" and not another and so
> they prefer to present SET first.
    ....
>
> Indeed, the whole terminology in which SETQ, COND, etc. are presented
> as funny kinds of "functions" and where functions are described as
> "evaluating an argument" (or not) may be a mistake.

It wasn't till I read this remark of Jeff's that I realised that one
reason I don't like Lisp is that, apart from "(" and ")", Lisp
doesn't help one to distinguish syntax words and function names.

Actually, as a sort of failed mathematician, I do appreciate the
elegance and economy of lisp syntax and could probably even like
using some versions (i.e. T or Scheme, though not Common Lisp). But
for teaching most of the kinds of students I have met I would far
rather use Pop-11 which, in many ways, is similar to, and owes a
great deal to, Lisp, though its syntax is much more redundant.
(There are other differences that are irrelevant to this
discussion.)

In teaching Pop-11 I always try to get students to think in terms of
a distinction between

(a) "syntax" words (e.g. the assignment arrow "->", parentheses,
    list brackets "[", "]", "if", "endif", "for", "endfor", "while",
    "define" etc.)
and
(b) words that represent procedures (this includes functions and
    predicates, which return results, and subroutines, which don't).

Most of its syntax forms have distinctive opening and closing
brackets, e.g. "for" ... "endfor", "define" ... "enddefine" etc.
This is verbose and inelegant but it helps learners to grasp the
sort of distinction (I think) Jeff is making.

There are exceptions, like the assignment arrow "->" which looks
just like an infix procedure name, but since students don't
naturally class it with the infix operators they already know (i.e.
the arithmetic operators) they easily accept it as a different kind
of beast, playing a special and important role in their programs.
(Actually taking something from the stack and putting it somewhere,
or running a procedure "in updater mode").

Another kind of exception that helps to muddle the distinction is
the use of special brackets for constructing lists and vectors

    [a list of words]   {a vector of words}

The brackets have both a syntactic role (delimiting expressions) and
also implicitly identify functions that return results. Again these
are very special forms that are easily learnt as special cases.
(Natural languages are full of special cases: the human brain seems
good at coping with lots of special cases.)

By contrast the uniform syntax of lisp makes it not so easy to grasp
that car, cdr, sqrt, append, etc. are different beasts from setq,
cond, quote, loop,  etc. Hence the tendency to make the kind of
mistake that Jeff describes, i.e. talking about different kinds of
functions.

It should be possible to investigate this conjecture about lisp
syntax causing more confusion empirically. I wonder if anyone has?

Moreover, the syntactic redundancy involved in using different
closing brackets for each construct in Pop-11 really does make it
much easier for many readers to take in programs, which is partly
analogous to the reasons why modern lisp programmers eschew the
syntactic economy of APL programmers and use long and redundant
procedure names in this sort of style
    (sum_of_squares_of x y)
 rather than
    (ssq x y)

Having extra syntactic redundancy also makes it easier to provide
helpful compile time checks and error messages, e.g. if an editing
mistake produces
    if isinteger(x)
    else
        foo(y)
    endif

the compiler will complain:

     MISPLACED SYNTAX WORD : FOUND else READING TO then

which can be especially helpful where sub-expressions are more
complex.

> Steve wrote
> >Obviously, the attractiveness of syntax is in the eye of the beholder.
> >I have to accept that folks like Jeff are sincere when they argue that the
> >syntax of Lisp is very attractive for them.  (I am even inclined to agree
> >when the alternative is C++.)

Jeff replied
> I would agree, provided we don't take this "eye of the beholder"
> stuff too far.  It's true that different people will prefer different
> syntaxes and that we can't say they're wrong to do so.  However, we
> shouldn't go on to conclude that all views on the virtues or otherwise
> of a syntax are equally valid.  Sometimes we can say, for example,
> that someone hasn't learned good techniques for reading and writing
> code in a particular syntax and that's why they find it so hard
> to read and write.
I agree.

It should also be possible to spell out precisely the cognitive
requirements for particular kinds of learners and users at
particular stages in their development, or for particular kinds of
programming tasks, and establish the strengths and weaknesses of
alternative languages by argument and evidence and not by simply
agreeing to differ.

E.g. the use of essentially redundant keywords like "then", "elseif"
and "else" in conditionals, and the use of distinct closing brackets
like "endif", "endwhile" that remind you what sort of construct they
are terminating, has a particularly important consequence. It
reduces short term memory load, and allows local attention
focussing, by providing local cues or reminders as to the nature of
the context, whereas without them one has to parse  a larger
expression to know that something plays the role of a condition, or
a consequent. (Some lisp constructs also use these extra keywords,
e.g. "for" ... "from" ... "to". So the fault is one of degree.)

The point about memory load is an objective fact. Whether it matters
for a particular user is going to depend on the kinds of cognitive
skills that user has already developed. It's a bit like the question
whether you should repeat the key signature at the beginning of
every line in a musical score, or simply indicate it when the key
changes. For certain experienced musicians the economical method
will suffice. For most ordinary mortals the frequent reminder is
useful, even if it does increase the clutter on the page and
probably the printing costs! Note that the time signature is not
usually repeated on every line because that is (mostly) evident from
the contents of each bar, or at least determined to within a small
range of alternatives.

Another analogy is with the difference between the musical
annotation "crescendo al fine" (= get louder from here to the end),
which requires the reader to remember the instruction from there on,
and the alternative notation which has a pair of lines getting
further and further apart, immediately above or below the stave, as
a _continual_ reminder that you have to be getting louder. For many
would-be performers the first form will not be as effective as
the second.

Exactly how important all these differences in syntactic redundancy
and memory load, etc. are, and in what way, will to some extent
remain unsettled until we have good theories describing the various
kinds of cognitive processes that go on in various kinds of people
when they read, or write, or design, or debug, programs.

But most designers of programming languages don't think about human
cognitive processes.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
    EMAIL   ······@cogs.sussex.ac.uk
From: Richard A. O'Keefe
Subject: Re: Virtues(?) of Lisp syntax
Date: 
Message-ID: <3752@goanna.cs.rmit.oz.au>
In article <····@syma.sussex.ac.uk>, ······@syma.sussex.ac.uk (Aaron Sloman) writes:
> It's interesting to see this debate surface yet again.

> It wasn't till I read this remark of Jeff's that I realised that one
> reason I don't like Lisp is that, apart from "(" and ")", Lisp
> doesn't help one to distinguish syntax words and function names.

I once built a programming language which was a sort of hybrid between
Lisp and Pop.  In RILL, one wrote e.g.
	$IF (> X Y) $THEN (SETQ MAX X) $ELSE (SETQ MAX Y) $FI
Basically, I used keywords for control structures ($PROC for lambda,
$BEGIN for let, $IF, $FOR, $WHILE, and so on) and Lisp syntax for the rest.
The parser _was_ better able to notice typing mistakes, and I _did_ make
far fewer parenthesis errors than I did with straight Lisp when I later
got my hands on it.  But that was before I met Emacs.

> By contrast the uniform syntax of lisp makes it not so easy to grasp
> that car, cdr, sqrt, append, etc. are different beasts from setq,
> cond, quote, loop,  etc. Hence the tendency to make the kind of
> mistake that Jeff describes, i.e. talking about different kinds of
> functions.

The thing about Pop (as it was last time I used it) is that there is
no defined internal form for code.  At one end of the spectrum we have
"token stream" and at the other end we have "compiled code", and there
is nothing in between.  I don't know how Pop-11 handles it these days,
but in WonderPop the easiest way to write a macro was e.g.

	form for x:id in e:expr do s:stmts enddo;
	    formvars L;
	    vars L; e -> L;
	    while L.null.not do L.dest -> x -> L; s enddo
	endform

which is roughly the equivalent of

	(defmacro for (x e &rest s)
	    (let ((L (gensym)))
		`(do ((,L ,e (cdr ,L)))
		     ((null ,L))
		     (setq ,x (car ,L))
		     ,@s)))

but it worked rather differently.  When the parser found the keyword
'for' it would call the function defined by the form.  That function
would call .readid to read the identifier for x. It would then check
that the next token was "in".  It would then call .readexpr to read the
expression e. It would then check that the next token was "do".  It
would then call .readstmts to read the body s. It would then check that
the next token was "enddo".  Then it would start on the expansion.  If
any of the reads or tests failed, it would backtrack and try another
form (any number of forms could start with the same keyword).  What were
x, e, and s bound to?  *lists of tokens*.  The body of the form was
processed by making a list of tokens and pushing the lot back on the
input token stream.

I'm sure I have the names of the reading functions wrong, but that's
basically how macros worked in WonderPop, as transformations of
sequences of tokens.

It works very well.  I found Pop "forms" easy to use.

But macros aren't the only use for an internal representation.
There was no debugging interpreter, for example.  (Though you could
quite easily trace functions.)

> It should also be possible to spell out precisely the cognitive
> requirements for particular kinds of learners and users at
> particular stages in their development, or for particular kinds of
> programming tasks, and establish the strengths and weaknesses of
> alternative languages by argument and evidence and not by simply
> agreeing to differ.

Agreed!

> But most designers of programming languages don't think about human
> cognitive processes.

You should read C.J.Date's comments on SQL...

-- 
Heuer's Law:  Any feature is a bug unless it can be turned off.
From: Jeff Dalton
Subject: Re: Virtues(?) of Lisp syntax
Date: 
Message-ID: <3427@skye.ed.ac.uk>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron
Sloman) writes:

>It wasn't till I read this remark of Jeff's that I realised that one
>reason I don't like Lisp is that, apart from "(" and ")", Lisp
>doesn't help one to distinguish syntax words and function names.

Actually, Pop-11 doesn't do much along those lines either.
It's not like it uses a different font for them (cf Algol).

>Moreover, the syntactic redundancy involved in using different
>closing brackets for each construct in Pop-11 really does make it
>much easier for many readers to take in programs, 

>E.g. the use of essentially redundant keywords like "then", "elseif"
>and "else" in conditionals, and the use of distinct closing brackets
>like "endif", "endwhile" that remind you what sort of construct they
>are terminating, has a particularly important consequence. It
>reduces short term memory load, [...]

Now, there's no doubt something to what you say.  However, I don't
think there's as much to it as you suppose.

One of the *mistakes* some people make when writing Lisp is to
try to add the redundancy you describe by putting certain close
parens on a line of their own followed by a comment such as
" ; end cond".  It makes the code *harder* to read, not easier.

People just starting to use Lisp, and people who use editors without a
good Lisp mode (which at least used to include the PopLog editor that
comes with Pop-11), may well find it helpful; but experienced Lisp
programmers generally do not.

Lisp procedures should be fairly short and indented so that it's easy
to see the scope of a construct: everything up to the next line not
indented more to the right.  Putting in lots of end markers makes this
harder to see, and short-term memory doesn't have much problem
keeping track.  
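
A small sketch of what reading by indentation means (the function is
invented):

    (defun average-or-zero (numbers)
      (let ((n (length numbers)))
        (if (zerop n)
            0
            (/ (reduce #'+ numbers) n))))

Everything indented under the LET belongs to the LET, everything under
the IF to the IF; each construct ends where its indentation does, and
nobody has to count the parentheses at the end.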

Of course, it's no doubt possible to write procedures (such as ones
that are too long) where end markers might make a difference.  But
it is also possible to avoid those cases, just as it is possible to
avoid other bad practices.

Moreover, if you want to argue for the advantages of distinct closing
brackets it's not necessary to compare Pop-11 with Lisp.  How about
comparing it with a language that uses "begin" and "end" (or "{" and
"}") for everything rather than "endif" "endwhile", etc.?  I think
there are too many other differences between Pop-11 and Lisp.

-- Jeff
From: Aaron Sloman
Subject: Re: Virtues(?) of Lisp syntax
Date: 
Message-ID: <3465@syma.sussex.ac.uk>
····@aiai.ed.ac.uk (Jeff Dalton) (14 Sep 90 20:21:11 GMT)
commented on my remarks on the syntactic poverty of Lisp.

(From me:)
> >It wasn't till I read this remark of Jeff's that I realised that one
> >reason I don't like Lisp is that, apart from "(" and ")", Lisp
> >doesn't help one to distinguish syntax words and function names.
>

(From Jeff:)
> Actually, Pop-11 doesn't do much along those lines either.
> It's not like it uses a different font for them (cf Algol).

I agree that Pop doesn't make the syntax words look different in
isolation. That wasn't what I meant (though perhaps that would be a
good idea especially in printed text). Rather, I meant that in
Pop-11 the syntax words (or many of them) play an obviously
different role in syntactic constructs, e.g. having matching
brackets and associated keywords. This helps students grasp that
they have a different role from function names, i.e. they are
concerned with how program text is grouped into sub-structures with
(e.g.) control relations between them (evaluating THIS expression
determines whether THAT one is executed, or whether iteration
continues, etc.)

Getting a good conceptual understanding of all this takes students
some time. Using collections of related syntax words that indicate
different kinds of syntactic "fields" or "contexts" in program text,
seems to help. (I now think this is more than just an aid to short
term memory as suggested in my earlier message. But the point needs
more thought.)

> ....
(From me:)
> >E.g. the use of essentially redundant keywords like "then", "elseif"
> >and "else" in conditionals, and the use of distinct closing brackets
> >like "endif", "endwhile" that remind you what sort of construct they
> >are terminating, has a particularly important consequence. It
> >reduces short term memory load, [...]
(From Jeff:)
> Now, there's no doubt something to what you say.  However, I don't
> think there's as much to it as you suppose.
>
> One of the *mistakes* some people make when writing Lisp is to
> try to add the redundancy you describe by putting certain close
> parens on a line of their own followed by a comment such as
> " ; end cond".  It makes the code *harder* to read, not easier.

(He goes on to justify this by saying that indentation plus a good
editor helps, and that GOOD lisp programs have short procedure
definitions and that adding such comments makes them longer and
harder to take in.)

> Of course, it's no doubt possible to write procedures (such as ones
> that are too long) where end markers might make a difference.  But
> it is also possible to avoid those cases, just as it is possible to
> avoid other bad practices.

I agree that a good editor helps a lot (though people often have to
read code without an editor, e.g. printed examples in text books,
etc) and I agree that well-written short lisp definitions
(e.g. up to five or six lines) are often (though not always) easily
parsed by the human brain and don't need much explanatory clutter.

But I doubt that it is always desirable or possible to build good
programs out of definitions that are short enough to make the extra
contextual reminders unnecessary. It probably depends on the field
of application. E.g. I suspect that in symbolic math programs you
can get away with lots of short procedures, whereas in graphics,
vision, operating system design, compilers(??) and building complex
interactive programs like text editors and formatters, some at least
of the procedures are going to have longish stretches of nested case
analysis, several nested loops, etc.

Even Common_Lisp_The_Language (second edition) has examples that I
think are long enough to benefit from these aids that you say are
unnecessary. (Scanning more or less at random for pages with lisp
code, I found examples on pages 340-349, 667, 759, 962 and 965.) Or
are these just examples of bad lisp style? (I've seen much worse!)


>
> Moreover, if you want to argue for the advantages of distinct closing
> brackets it's not necessary to compare Pop-11 with Lisp.  How about
> comparing it with a language that uses "begin" and "end" (or "{" and
> "}") for everything rather than "endif" "endwhile", etc.?

Yes, Pascal and C (especially C) are open to some of the same
objections as lisp, because they don't have sufficiently distinct
opening and closing brackets, though the use of "else" is a step
in the right direction.

This is why many programmers using these languages add the kinds of
comments you disapprove of. (Some Pop-11 programmers do also.)

ML is another language which, from my observations and reports of
student difficulties, has a syntax that is too economical, though in
a very different way from Lisp. I don't know the language well, but
I think it requires the reader to depend too much on remembered
operator precedences in different contexts. This is no problem for a
computer but very hard for people. So students often have great
trouble understanding how compile-time error messages relate to the
code they have written, which "looks" obviously right to them.
Additional use of brackets might help. (Perhaps this syntactic
poverty will prevent ML ever being used widely for large scale
software engineering.)

Prolog has yet another kind of syntactic poverty, inherited from its
dependence on a logical interpretation. (E.g. textual order has very
different procedural meanings depending on context: within a rule
concatenation means something like "and then", whereas between rules
with the same predicate it means something like "and if that fails
then try...")

My general point is that human perception of complex structures is
easiest and most reliable when there is well chosen redundancy in
the structures, and most difficult and error-prone when there isn't.
However, as you point out, too much redundancy can sometimes get in
the way, and we don't yet know the trade-offs. The use of
indentation in lisp and C is an example of redundancy that is an
essential aid for humans although totally unnecessary in theory. But
it is only one type of redundancy, and is useful only on a small
scale (as you imply).

(Maybe this stuff should have been cross-posted to comp.cog-eng,
since cognitive engineering is what we are talking about.)

For the record, I should also say that I don't think there's much
difference in readability between the following:

    a(b(c(d, e), f(g, h)))      [Pop-11 and Pascal, etc]

    (a (b (c d e) (f g h)))     [Lisp]

Though I do find the latter more visually elegant.


> I think
> there are too many other differences between Pop-11 and Lisp.

Yes, there are other differences, including the differences
mentioned by Richard O'Keefe, to which I'll respond in another
message.

Aaron
From: Jeff Dalton
Subject: Re: Virtues(?) of Lisp syntax
Date: 
Message-ID: <3444@skye.ed.ac.uk>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:
>····@aiai.ed.ac.uk (Jeff Dalton) (14 Sep 90 20:21:11 GMT)
>commented on my remarks on the syntactic poverty of Lisp.

>> One of the *mistakes* some people make when writing Lisp is to
>> try to add the redundancy you describe by putting certain close
>> parens on a line of their own followed by a comment such as
>> " ; end cond".  It makes the code *harder* to read, not easier.
>
>(He goes on to justify this by saying that indentation plus a good
>editor helps, and that GOOD lisp programs have short procedure
>definitions and that adding such comments makes them longer and
>harder to take in.)

Humm.  After reading this message from you, and some e-mail, I've
decided that I must not have been as clear as I thought, especially
when I brought in the question of procedure length.

What happens in Pop-11 is that every "if" ends in "endif", every
"define" ends in "enddefine", and so on.  I think that doing that
(or something close to it) in Lisp is (a) unnecessary, and (b) a
mistake.  It makes the code harder to read, but not just, or even
primarily, because it makes the code longer.  The main problem is
that the code is more cluttered, the indentation obscured, and it
becomes harder to pick out more important names that appear in the
code, because there are all these other things made out of letters.

In the Lisp indentation style used in, for example, most of the
good Lisp textbooks, close parens are not emphasised (they're
just on the end of a line containing some code) and so don't
get in the way when reading.  They contribute to the overall 
"shape" of the code, but don't have to be processed individually
by the (human) reader.  If instead they're put on lines of their
own, and especially if an "end" comment is attached, then they
become more significant and it takes more work to process them.

Moreover, to the extent that one has to read the end marker, to see
whether it's an "endif" or an "endfor", one has failed to see this
scope information in the indentation itself; so I'm not sure the
increase in redundancy is as much as one might think.  

Anyway, I don't know whether having distinct end brackets ("endif" vs
"endfor") is better than what's done in Pascal (all such groupings
use "begin" and "end" -- Pascal seems to have dropped the Algol 60
provision for comments following the "end") or C (all groupings use
"{" and "}").  I just don't think it works that well in Lisp.

But what about "long" procedures?  

One thing to note is that it isn't just a matter of the number of
characters or lines involved.  If we have a DEFUN, then a LET, and
everything else is in a CASE, and each CASE clause is fairly short,
the whole procedure can be fairly long without becoming especially
hard to read.  So it's really a question of complexity, in some
sense.
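
A skeleton of the shape I mean (the names and the toy command set are
invented):

    (defun handle-command (command state)
      (let ((op   (first command))
            (args (rest command)))
        (case op
          (:push  (cons (first args) state))
          (:pop   (rest state))
          (:clear '())
          (:size  (length state))
          (t      (error "Unknown command: ~S" op)))))

Each clause stays short, so the whole thing can grow a good deal longer
than this without becoming hard to read.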

A good way to make long procedures more readable is to break them
up, visually, into manageable sections.  Blank lines and a brief
comment introducing a section are more effective, in my experience,
than emphasising end markers.

Another important technique is to make the procedures shorter by
using appropriate macros and higher-order functions (and, of course,
by putting some things into separate procedures). 

In many languages, loops are written out using a "for" statement
(or equivalent), and the programmer has to recognize certain common
patterns of code.  Lisp is often written that way too, but since
it's fairly easy to write a function or macro that embodies the
same pattern in a shorter, more easily recognized, form, the Lisp
programmer has some effective alternatives to writing out loops
in-line.  Hence functions such as MAPCAR, macros that implement
"list comprehensions", and so on.

>> Of course, it's no doubt possible to write procedures (such as ones
>> that are too long) where end markers might make a difference.  But
>> it is also possible to avoid those cases, just as it is possible to
>> avoid other bad practices.

>I agree that a good editor helps a lot (though people often have to
>read code without an editor, e.g. printed examples in text books,

The editor is to help you *write* code that can be read.  The whole
point of good indentation is to make it possible to read the code
without having to pay attention to individual parentheses.  And
the code is readable whether it's in a book, a file, or whatever.

The reason people without a good Lisp editor may find end markers
helpful is that they have difficulty matching parens when they're
writing and find it difficult to check whether they've made a mistake.
And, since they're new to Lisp, they may also find that syntax easier
to read. 

But I certainly didn't mean to say (if I did say it) that a good
editor was necessary in order to read the code.  (Which is not to
say there *couldn't* be an editor that helped.)

>etc) and I agree that well-written short lisp definitions
>(e.g. up to five or six lines) are often (though not always) easily
>parsed by the human brain and don't need much explanatory clutter.

I don't know where you got the "five or six lines" from; it's
certainly not the limit I had in mind for "short".  And *of course*
such functions are not *always* easy to read; good indentation, and
other good practices, are still necessary.

There are readable Lisp procedure definitions up to a page long in
some textbooks (and of course in other places).  Once definitions
are longer than a page, they are usually significantly harder
to read in any language, although there are certain combinations
of constructs for which long definitions aren't so great a problem.
(Such as the CASE example mentioned above.)

I don't think Lisp is necessarily much worse than other languages
in this respect (if it's worse at all).

>But I doubt that it is always desirable or possible to build good
>programs out of definitions that are short enough to make the extra
>contextual reminders unnecessary. 

Given your "five or six lines", I'm not surprised to find you
think this way.

And there are contextual reminders other than end markers, such
as introductory comments, which I find more effective.

>Even Common_Lisp_The_Language (second edition) has examples that I
>think are long enough to benefit from these aids that you say are
>unnecessary. (Scanning more or less at random for pages with lisp
>code, I found examples on pages 340-349, 667, 759, 962 and 965.) Or
>are these just examples of bad lisp style? (I've seen much worse!)

I have also seen code that is harder to read.  Look at "AI Practice:
Examples in Pop-11" sometime.  [I'm sorry.  I'm sure this book has
many virtues.  But some of the procedures cross a couple of page
boundaries or are, in other ways, somewhat hard to read.]

Moreover, we have to be clear on just what aids I think are
unnecessary.  I think that imitating Pop-11 end markers is not
a good idea (because there are so many of them) but that end
markers are sometimes helpful.  This was discussed in more
detail above.

>My general point is that human perception of complex structures is
>easiest and most reliable when there is well chosen redundancy in
>the structures, and most difficult and error-prone when there isn't.
>However, as you point out, too much redundancy can sometimes get in
>the way, and we don't yet know the trade-offs.

Here I agree.  Well-chosen redundancy is important.  However, it's
important to bear in mind that many programmers new to Lisp haven't
yet learned to "see" the redundancy or else chose programming styles
that obscure it.  The redundancy comes from three things:

  * Arity (e.g. CAR has one argument).
  * Indentation.
  * The overall shape of the parenthesis groups
    (note that this does not require that the reader pay
    much attention to individual parens)

To show what I mean by the last, here's an example.  Code
that looks like this:

	(    ((   )
	      (   ))
	     ((   )
	      (   ))
	     (
	      (   )))

is usually a COND.  (COND is perhaps the least readable Lisp
construct.)  Breaking up the paren groups tends to remove this
information.  Putting in end markers adds a different redundancy
but tends to make the indentation less effective.

Anyway, I'm not interested in a language war or in a discussion
that more properly belongs in alt.religion.computers.  I just
want to keep some space for the view that Lisp is (or can be)
readable and to suggest how some people may have been approaching
the language in less effective ways.

-- Jeff
From: Ozan Yigit
Subject: Ahem. [Re: Virtues of Lisp syntax]
Date: 
Message-ID: <15089@yunexus.YorkU.CA>
[ongoing discussion regarding lisp syntax]

In article <····@skye.ed.ac.uk> ····@aiai.UUCP (Jeff Dalton) writes:

>I would agree, provided we don't take this "eye of the beholder"
>stuff too far.

Why stop now, after we have come this far ?? ;-)

>... It's true that different people will prefer different
>syntaxes and that we can't say they're wrong to do so.  However, we
>shouldn't go on to conclude that all views on the virtues or otherwise
>of a syntax are equally valid. 

It follows therefore that one should try to substantiate some or all of the
claims regarding the effectiveness and benefits of a syntax, such as that
of lisp, instead of just presenting opinions. I have seen studies on
"natural artifical languages" (Gary Perlman's term for programming/command
languages), effects of punctuation in programming etc.  but I don't recall
seeing a study that has a definitive word on overwhelming benefits of one
syntax over another. If you know of any that substantiate various claims
[so far made] about lisp syntax, I would be very interested in it.

[I am curious: Does anyone know why Papert chose lisp-like semantics but
not the syntax for Logo?]

[and regarding the benefits of syntax]

>You can get *some* of the benefits, but by sacrificing some
>of the others.  Lisp occupies something like a local maximum in
>"benefit space".

Really. QED is it?

oz
---
The king: If there's no meaning             Usenet:    ··@nexus.yorku.ca
in it, that saves a world of trouble        ......!uunet!utai!yunexus!oz
you know, as we needn't try to find any.    Bitnet: ··@[yulibra|yuyetti]
Lewis Carroll (Alice in Wonderland)         Phonet: +1 416 7365257x33976