From: Doug Roberts
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <31670@sri-unix.SRI.COM>
In article <···@skye.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

> In article <····················@studguppy.lanl.gov> ·······@studguppy.lanl.gov (Doug Roberts) writes:
>    I think, however, that you missed one of the major reasons that the
>    Unix LISP environment is still decidedly inferior to a LISPm: the
>    majority of the market that is considering LISP as a language in which
>    to deliver applications is currently a member of either the Unix or
>    the VMS community: _they are not aware of the productivity that exists
>    on a LISPm_.
> 
> I think you have identified an important point.  However, I would
> guess that most of the people who implement Lisps for Unix (Lucid,
> Franz Inc, et al.) do have a fairly good idea of what the Lisp Machines
> accomplish.  So why don't they provide the same thing on conventional
> machines?
> 
> I think it's possible to provide environments that are very similar.
> People here who use Inference ART (Automated Reasoning Tool, or
> something like that), which is built on top of Lisp, report that the
> ART environment on a Sun is very close to that on a Symbolics,
> although at the Lisp level the debugger probably isn't as good.
> 
> However, possible is not the same as easy, and I suspect the Lisp
> implementors have not had sufficient resources to let them prepare
> environmental tools as soon as they'd have liked.
> 
During my visit to Lucid a few months ago, I got the impression that
many (but not all) of the internals people there had a previous
history with LispMs. I did receive an interesting comment to my
suggestion that they should try to emulate the functionality of
Symbolics' window debugger in their lisp environment. The comment was
something like: "Groan... Just what we need... To make our product
more Symbolics-like." 

I suspect that the Unix lisp developers' priorities regarding
development environments are changing. After one of my previous
postings in which I complained about the lack of good debuggers &
inspectors, I received mail from Lucid and Ibuki, and phone calls from
Franz and Envos. Ibuki, Franz, and Envos all offered their products as
examples of "new, improved," more functional Lisp development
environments. I haven't heard directly from Lucid regarding any
efforts they might have on-going, but I did read in a previous posting
from this group about a "Cadillac" environment that they are working
on?? I also don't know what Sun might be working on, if anything (Sun
of SPE? :-})..

On that note...

--Doug
	


--

===============================================================
Douglas Roberts
Los Alamos National Laboratory
Box 1663, MS F-602
Los Alamos, New Mexico 87545
(505)667-4569
····@lanl.gov
===============================================================

From: Jeff Dalton
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <469@skye.ed.ac.uk>
In article <·····@sri-unix.SRI.COM> ·······@studguppy.lanl.gov (Doug Roberts) writes:
>I did receive an interesting comment to my suggestion that they should try
>to emulate the functionality of Symbolics' window debugger in their lisp
>environment. The comment was something like: "Groan... Just what we 
>need... To make our product more Symbolics-like." 

I guess some of the Unix Lisp vendors are under commercial pressure to
make Lisp fit better with conventional languages and conventional ways
of doing things rather than make systems for Lisp hackers (such hackers
being in limited supply).  There is often also a strong negative reaction
to Lisp systems that are many megabytes in size.  To some extent this is
unfair, because calculations of the size of C systems tend to omit all
the things C gets "free" from Unix, but it is not completely unfair.

Lisp often ends up in competition with C.  People disagree about the
best way to compete (become more C-like?  less C-like?), but it has
proved difficult to convert the world to Lisp-like ways of thinking.
And so the "don't be like Symbolics" approach has its attractions.
From: John Unruh, NY9R
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <8585@ihlpf.ATT.COM>
In article <···@skye.ed.ac.uk> ····@aiai.UUCP (Jeff Dalton) writes:
>
>I guess some of the Unix Lisp vendors are under commercial pressure to
>make Lisp fit better with conventional languages and conventional ways
>of doing things rather than make systems for Lisp hackers (such hackers
>being in limited supply).  There is often also a strong negative reaction
>to Lisp systems that are many megabytes in size.  To some extent this is
>unfair, because calculations of the size of C systems tend to omit all
>the things C gets "free" from Unix, but it is not completely unfair.
>

There is one big distinction between the things C programmers get from
the UNIX (R) Operating System and what a LISP programmer gets from a
good LISP system.  Usually the tools used for C programming reside on
the disk except for the moments when they are in use (modulo the sticky
bit).  If a tool (especially a large one) is used in a LISP image, it
increases the size of the LISP image's virtual memory.  If the LISP
implementation is good at saving memory, programmers' tools will be
autoloading files of some sort, or at least not be bound into the image
in such a way that every delivery system must have them.

Many machines have limits on the maximum process size, and have problems
with really big processes.  This may be an artifact of how conventional
programming languages work.  Most C programs are fairly small, and the
environment is not integrated in the same way as a Lisp machine, so the
whole thing tends to be less memory intensive.
-- 
                               John Unruh
                               AT&T-Bell Laboratories
                               att!ihlpf!jdu
                               (312)979-6765
From: Aaron Sloman
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <1029@syma.sussex.ac.uk>
···@ihlpf.ATT.COM (John Unruh, NY9R) writes:

>
> In article <···@skye.ed.ac.uk> ····@aiai.UUCP (Jeff Dalton) writes:
    ........
> > ......There is often also a strong negative reaction
> >to Lisp systems that are many megabytes in size ....
> >
    .......
>
> Many machines have limits on the maximum process size, and have problems
> with really big processes.  This may be an artifact of how conventional
> programming languages work.  Most C programs are fairly small, and the
> environment is not integrated in the same way as a Lisp machine, so the
> whole thing tends to be less memory intensive.

Following on from my previous message on the syntactic differences
between Lisp and Pop-11, I guess I should point out that a full
Pop-11 development environment on a workstation tends to be very
much smaller than some of the well known Lisp environments.

This is partly because of the heavy use of auto-loading, which means
that Pop-11 facilities you don't use don't get linked into your
process. You can get useful work done in Poplog Pop-11 even on a 2
Mbyte machine (and that includes running a Pop-11 process that
includes an integrated Emacs-like, but smaller, editor VED), whereas
Lisp development environments tend to require far more memory.
Poplog Common Lisp doesn't require nearly so much (you can get
useful work done in 2.5 to 3 Mbytes) but it is not as heavily
optimised as Lucid. (It compensates by compiling faster.)

Like other AI language vendors we are adding tools to Poplog to allow
you to link an image containing only what your final system needs.
However, the minimal size will still be considerably more than
the minimal size of a C program.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
    INTERNET: ························@nsfnet-relay.ac.uk
              ···········································@relay.cs.net
    JANET     ······@cogs.sussex.ac.uk
    BITNET:   ························@uk.ac
        or    ······································@cunyvm.cuny.edu

    UUCP:     ...mcvax!ukc!cogs!aarons
            or ······@cogs.uucp
From: Jeff Dalton
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <486@skye.ed.ac.uk>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:

>a full Pop-11 development environment on a workstation tends to be very
>much smaller than some of the well known Lisp environments.

Since the full PopLog includes a Common Lisp, this seems to show
that Lisp environments could be smaller, not that Pop-11 requires
less than Lisp.
From: Aaron Sloman
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <1028@syma.sussex.ac.uk>
I can't resist reacting to this (sorry Jeff)

····@aiai.ed.ac.uk (Jeff Dalton) writes:

> Organization: AIAI, University of Edinburgh, Scotland
>
> Lisp often ends up in competition with C.  People disagree about the
> best way to compete (become more C-like?  less C-like?), but it has
> proved difficult to convert the world to Lisp-like ways of thinking.

There is some evidence, albeit based only on a number of cases
rather than systematic research, that it is much easier to convert
people to Lisp-like ways of thinking if you give them a Lisp-like
language (with all the benefits of incremental compilation,
integrated editor, garbage collector, pattern matcher, lists,
records, arrays, tuples, etc etc) but not a lisp-like syntax.

Instead provide a syntax that is richer, more readable, and above
all more familiar to them.

I am, of course, referring to Pop-11, which is far from perfect, but
has converted many C, Fortran and Pascal users to enthusiastic AI
language users on Suns, Vaxen, HP machines, Apollos, Macs (see
review of Alphapop in Byte May 1988).

I am not criticising Lisp for any intrinsic faults - just commenting
on how its syntax makes a significant proportion of experienced
programmers react.

Try putting in front of them any lisp procedure definition involving
a few nested loops and moderately long multi branch conditionals
with 'else' clauses. Ask them to try to understand it. Then compare
their ability to understand the Pop-11 equivalent using the sort of
syntax indicated here (in a simpler example).

    define test(item, P, Q, R, S, T);
        lvars item, procedure(P, Q, R, S, T);

        if null(item) then return(false)
        elseif P(item) then return(Q(item))
        elseif R(item) then
            if S(item) then return(item)
            elseif T(item) then return(Q(item))
            else return(item)
            endif
        else return(false)
        endif
    enddefine;

Here's how you let the variables x, and y, iterate over the elements
of lists L1 and L2 in Pop-11, so that you can apply function f to
one element from each list:

    for x, y in L1, L2 do
        f(x,y)
    endfor;

This kind of syntax, with all the extra syntax words, is often more
verbose than Lisp, and parsing by machine is more complex. But the
explicit use of distinctive opening and closing brackets "endif",
"endfor" etc, and sub-syntax words like "do", "elseif" and "then"
seems to add redundancy that is better suited to HUMAN readers
(apart from those already familiar with lisp, of course), and it
also allows the compiler to help you more if you put a closing
bracket in the wrong place. You get an error message saying
something like

    MSW: MISPLACED SYNTAX WORD : FOUND endwhile READING TO endif

instead of a program that compiles and then behaves oddly.

I think the key factor is reducing the load on short term memory.
In Pop-11, like many non-AI languages, the form:

    elseif ..... then .....

makes the context clear without your having to remember or scan back
to the beginning of the conditional expression, whereas in standard
Lisp syntax

    ((.....) (.....))

could mean almost anything, depending on the context: to work out that
it is part of a conditional one has to look back to the beginning of the
expression for the keyword "cond".

More generally "(x y z)" in LISP could, depending on context, be a
procedure call, a set of formal parameters, a set of local
variables, a list of three atoms, a condition in a conditional, the
consequent of a conditional, and so on.
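
To make the point concrete, here is a small invented Common Lisp
fragment (my own example, not from any of the systems discussed) in
which the same list shape turns up in several of these roles:

```lisp
;; The same shape "(x y z)" in different syntactic roles:
(defun classify (x y z)          ; a list of formal parameters
  (cond ((eql x y) 'same)        ; a cond clause: condition plus consequent
        (t (list x y z))))       ; a procedure call building a list of three

(classify 1 1 0)                 ; => SAME
(classify 1 2 3)                 ; => (1 2 3)
```

Only position within the enclosing form tells the reader which role is
which.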

By contrast, POP-11 uses a large variety of keywords to distinguish
syntactic contexts, instead of relying so much on POSITION to
determine significance. Parentheses are used for procedure calls in
the normal way, but distinct brackets are used for list expressions
"[...]" and vectors "{...}" and this also helps to reduce cognitive
load during reading. (This sort of thing is particularly important
when large programs have to be maintained by programmers who were
not the original authors.)

Please note that I am not saying that Pop-11 has the perfect syntax,
only (a) I've met quite a lot of programmers who had been put off
Lisp but have found Pop-11 very attractive and (b) this is to be
expected and explained because lisp is syntactically elegant and
economical whereas Pop-11 is syntactically rich and redundant.

I conjecture that far far more programmers around the world would be
using AI tools for all kinds of software development if the AI
community had been pushing Pop-11 at least as a way of getting into
AI. Turning to Lisp after having learnt AI techniques through Pop-11
is often quite successful.

As for C - well it is popular despite having a comparatively poor
syntax (and usually dreadfully unhelpful compiler error messages),
but that is partly because it is a much simpler language than either
Lisp or Pop-11, and so there is much less to learn, and much less
for the syntax to distinguish.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
    INTERNET: ························@nsfnet-relay.ac.uk
              ···········································@relay.cs.net
    JANET     ······@cogs.sussex.ac.uk
    BITNET:   ························@uk.ac
        or    ······································@cunyvm.cuny.edu
    UUCP:     ...mcvax!ukc!cogs!aarons
            or ······@cogs.uucp
From: Mayka
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <8610@ihlpf.ATT.COM>
In article <····@syma.sussex.ac.uk>, ······@syma.sussex.ac.uk (Aaron Sloman) writes:
> By contrast, POP-11 uses a large variety of keywords to distinguish
> syntactic contexts, instead of relying so much on POSITION to
> determine significance. Parentheses are used for procedure calls in
> the normal way, but distinct brackets are used for list expressions
> "[...]" and vectors "{...}" and this also helps to reduce cognitive
> load during reading. (This sort of thing is particularly important
> when large programs have to be maintained by programmers who were
> not the original authors.)

Common Lisp has the ability to handle

	1) Keyword-named functions, macros, and special forms.
		These make the function, macro, or special form look
		more like "constant" syntax that the ordinary
		programmer should not redefine.  Helps it to look
		more like syntax and less like dumb data.

	(defmacro :if (test &key then else)
		`(if ,test ,then ,else)
		)

	(:if x
	   :then y
	   :else z
	   )

	2) Keyword arguments to functions, macros, and special forms.
		Makes syntax less dependent on position, more mnemonic.
		The cost for macros and special forms is only at
		compile time.  Even for functions, keyword arguments
		need not be unduly expensive.

	3) Alternate grouping characters.  Characters such as  [ ]
		and  { }  can be redefined as synonyms for  ( ) , with
		the restriction that opening and closing character
		must match.  A convention could even be agreed upon that,
		for example,  { }  be used for lists that are clearly
		dumb data, leaving  ( )  more clearly as the function-
		calling syntax.

All this can be done within Common Lisp itself.  There need be no
confusion if such constructs are used uniformly within a given
software project.
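
Point (3), for instance, can be sketched with the standard reader-macro
hooks (a minimal sketch: it implements only the [ ] grouping, not any
{ } convention, and real projects would want more error checking):

```lisp
;; Teach the reader that [ ... ] groups like ( ... ), using only
;; standard Common Lisp reader-macro facilities.
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    ;; Read forms up to the matching ], returning them as a list,
    ;; so [+ 1 2] reads as (+ 1 2).
    (read-delimited-list #\] stream t)))

;; Make ] a terminating character so it properly ends the list.
(set-macro-character #\] (get-macro-character #\)))

;; After the above, [+ 1 2] evaluates to 3.
```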

Has anyone ever considered a plan like this - for the sake of
timid souls who are intimidated by Lisp's "native" syntax?


	Lawrence G. Mayka
	AT&T Bell Laboratories
	···@ihlpf.att.com
From: Dan Weinreb
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <370@odi.ODI.COM>
It's true that programmers familiar with C or Fortran are sometimes
scared off by Lisp's syntax, because it's so different.  But I dispute
your claims that Lisp's syntax is *actually* inferior.

Lisp programs with nested loops and complicated nested conditionals
are quite easy to read, because all serious Lisp programmers indent
their code properly and uniformly (because their editors make it easy
to do so).  Indentation is how people really understand the groupings
involved in these loops and conditionals.  Try taking your Pop-11
example, and intentionally making *one* indentation error; for example,
take the "if S(item)" line, the fourth line in the body, and indent
it right under the third.  Nobody will understand the code any more.

I agree that the Pop-11 indentation syntax you show is better than
what's currently in Common Lisp.  This situation is due to an
unfortunate holy war in the Lisp community, whose members cannot
agree on an iteration construct.  In the Symbolics superset of
Common Lisp, here's what your example would look like:

  (loop for x in l1
        for y in l2 do
    (f x y))

It has this advantage over the Pop-11 syntax you showed: each variable
is grouped together with the expression that produces the list, so
it's clear who goes with whom.  The above example is too trivial to
demonstrate the point; it's clearer when the expressions are larger
than "l1".  (By the way, the keyword "do" will probably be removed or
made optional in the version of "loop" proposed for Common Lisp; years
of experience show that it would work better that way.)

You talk about "a program that compiles and then behaves oddly"
because of mismatches of opening and closing syntax.  This never
happens in Lisp, when used with serious editors.  It may not be
apparent to you that editors would solve this issue so well, and I
don't know how to demonstrate it; I have spent over ten years doing
this kind of programming, and I'll have to ask you to take my word for
it.  Or ask some other experienced Lisp programmers who use equivalent
editors.


      ((.....) (.....))

  could mean almost anything, depending on the context: to work out that
  it is part of a conditional one has to look back to the beginning of the
  expression for the keyword "cond".
In a very long "cond" on a very small screen or window, yes, this
is true.  On the other hand, if you are looking at a big procedure
through a tiny window, it's hard to understand what's going on even
if you do know which kind of statement is which.  I agree that this
can be a problem, but in practice, "cond"s usually don't get that
big, and modern serious Lisp programmers tend to have large screens,
so it's not much of a problem.  Comments in the code also work wonders
for readability, not only making it clear that you're looking at a "cond",
but (far more important) explaining what the code really means.

    More generally "(x y z)" in LISP could, depending on context, be a
    procedure call, a set of formal parameters, a set of local
    variables, a list of three atoms, a condition in a conditional, the
    consequent of a conditional, and so on.

    By contrast, POP-11 uses a large variety of keywords to distinguish
    syntactic contexts, instead of relying so much on POSITION to

I could equally well say that "xxx yyy zzz" could mean any number of
things in Fortran or C.  In all of the languages under discussion, you
tell what's going on via keywords.  In Lisp, you know that "(cond
...)" means one thing and "(defun ...)" means another.  It's perfectly
clear, without all those different punctuation marks.

    determine significance. Parentheses are used for procedure calls in
    the normal way, but distinct brackets are used for list expressions
    "[...]" and vectors "{...}" and this also helps to reduce cognitive
    load during reading. (This sort of thing is particularly important
    when large programs have to be maintained by programmers who were
    not the original authors.)

Ah, now you're talking about distinguishing between expressions and
textual constants.  One of the few punctuation marks that Lisp *does*
use is the single-quote, which makes it clear whether something is
a constant or not.  Even in a very big constant, where the single-quote
is far away, it is *immediately* apparent whether you are looking
at code or not, since code can easily be recognized by the keywords
and indentation.  The only exception, of course, is when the
constant *really is* a piece of list structure that works as Lisp
code, but of course such constants look like code.  Again, I have
to ask you to believe me that in years of maintaining other people's
programs, I hardly ever had any trouble distinguishing constants from code.

I don't know enough about Pop-11 to comment on its syntax, but a good
way to see the problems with one kind of conventional syntax is to
look at C.  (Yes, I saw that you don't think highly of C's syntax
either.)  There's a great book called "C Traps and Pitfalls" (or
something very close to that) that explains a lot of ways that you can
get fouled up by C's syntax.  My favorite: how do you write a C macro
that expands into a statement?  It's very hard to get this right.  The
tricky issue is whether the macro should include the semicolon or not,
and how this interacts with the "if" and compound statement syntax in C.
The book shows a way that works, but it's really an obscure kludge.
This is the kind of problem Lisp would never have, because its lexical
syntax is so simple and uniform.

It's true that many people who are familiar with other languages
are scared off by Lisp syntax, but it's too bad, because their
fears are not justified by the facts.

Daniel Weinreb     Object Design, Inc.     ···@odi.com
From: Jeff Dalton
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <487@skye.ed.ac.uk>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:
>There is some evidence, albeit based only on a number of cases
>rather than systematic research, that it is much easier to convert
>people to Lisp-like ways of thinking if you give them a Lisp-like
>language (with all the benefits of incremental compilation,
>integrated editor, garbage collector, pattern matcher, lists,
>records, arrays, tuples, etc etc) but not a lisp-like syntax.
>
>Instead provide a syntax that is richer, more readable, and above
>all more familiar to them.
>
>I am of course, referring to Pop-11

Over the years, quite a few preprocessors have been written.
These preprocessors convert some more conventional notation
to Lisp.  None of them has caught on in the Lisp community.
So I guess that shows there are some people who actually
prefer the Lisp syntax.

Of course, Pop-11 isn't a preprocessor.  But it's still true,
I think, that syntax is a matter of taste.

>Here's how you let the variables x, and y, iterate over the elements
>of lists L1 and L2 in Pop-11, so that you can apply function f to
>one element from each list:
>
>    for x, y in L1, L2 do
>        f(x,y)
>    endfor;

Lisp macros are an alternative to preprocessors that can work well
in some cases.  For example, the loop above could be written

     (loop for x in L1 and y in L2
           do (f x y))

-- Jeff
From: John Nagle
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <11917@well.UUCP>
     Syntax is a big issue for beginning programmers, but assumes much
less importance once you know a few different languages.  Semantics and
paradigm are much bigger issues when writing sizable programs.  As
programs become larger, syntactical issues retreat in importance and
issues such as namespace control and other modularity issues dominate.

     Most of the attempts to make LISP look like Pascal or one of its
descendants result in a syntax that is more, rather than less, painful.
On the other hand, the fact that data and programs have the same 
representation in LISP really doesn't seem to be used all that much
any more.  It was felt to be terribly important at one time, but today,
it just doesn't seem to be a big issue.

					John Nagle
From: Mike Thome
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <40732@bbn.COM>
In article <·····@well.UUCP> ·····@well.UUCP (John Nagle) writes:
>     Most of the attempts to make LISP look like Pascal or one of its
>descendants result in a syntax that is more, rather than less, painful.
For example, take a look at Logo...  When I was learning lisp, at first
it was its syntactic consistency which was probably the most attractive
attribute.

>On the other hand, the fact that data and programs have the same 
>representation in LISP really doesn't seem to be used all that much
>any more.  It was felt to be terribly important at one time, but today,
>it just doesn't seem to be a big issue.
>					John Nagle
I must disagree here - any macro (including most setf expanders) will
depend on the similarity between data and program. Personally, I've been
working on a number of different projects which build (sometimes compile)
routines at runtime.  While this might even be unusual, lisp is, after
all, one of the very few (only mainstream?) language which can do this -
I have a hard time imagining what it would be like to try to implement
one of these systems in a "conventional" language.
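
In Common Lisp the core of this is brief (an illustrative sketch of the
standard `compile` function, not code from any of the projects
mentioned):

```lisp
;; Code built as ordinary list structure, then compiled at run time.
(let* ((form (list 'lambda '(x) (list '* 'x 'x)))  ; => (lambda (x) (* x x))
       (square (compile nil form)))                ; compile it anonymously
  (funcall square 7))                              ; => 49
```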

Mike Thome (······@bbn.com)
From: Mike Shaff
Subject: Re: Unix Lisp Environments (Really Lisp Syntax)
Date: 
Message-ID: <9636@polya.Stanford.EDU>
ciao,

While the number of people that do *NOT* use Lisp's ability to operate on
programs as data may have increased, I think this has more to do with an
increase in the number of people that use Lisp than a lack of interest in the
capability.  For people who need to write code that performs analysis of
one form or another, simulates, etc., a program, the "code as data"
quality of Lisp is critical to their effort.  As a side comment, I think
that in the future we will see *MORE* of this type of usage as people
need & create more embedded language systems (e.g., AutoLisp).

(peace chance)

	mas
-- 
From: Aaron Sloman
Subject: Code as data (Syntax, macros, run-time compilation)
Date: 
Message-ID: <1044@syma.sussex.ac.uk>
My original posting on the unfortunate effects of Lisp syntax produced a
number of comments and stirred up a discussion of code as data. This
message attempts to separate what is important about this in lisp from
what is superficial. In particular I try to show that the syntax of
lisp has nothing to do with the possibility of having macros and that
not all cases where code has to be treated as data require the original
source code to be manipulated.


·····@well.UUCP (John Nagle) writes:

>      Syntax is a big issue for beginning programmers, but assumes much
> less importance once you know a few different languages.
My main point in raising this issue was that the syntax of Lisp has had
the effect of turning off a number of experienced programmers who might
otherwise have been tempted to get into AI tools and techniques.
This is consistent with the view that once you have got into Lisp
it is a fine language.

> ... Semantics and
> paradigm are much bigger issues when writing sizable programs.  As
> programs become larger, syntactical issues retreat in importance and
> issues such as namespace control and other modularity issues dominate.
Agreed - but syntax and in particular readability remain relevant when
someone has to maintain code written by others.

>      Most of the attempts to make LISP look like Pascal or one of its
> descendants result in a syntax that is more, rather than less, painful.

Painful for whom? Has this been investigated systematically? I can
well imagine that someone brought up on and/or very used to Lisp will
find the extra syntactic clutter objectionable whereas for others it
might make the language far more approachable (as Pop-11 has proved
to be for many experienced programmers.)

Have any tests been done on programmer productivity, reliability,
ease of maintenance over a longish period, using standard Lisp and
Lisp with a Pascal-like syntax? (I fear such tests would be very
expensive if done with sufficient thoroughness to give conclusive
evidence.)

·····@Sesame.Stanford.EDU (Mike Shaff) writes:

> For people who need to write code that performs analysis of one
> form or another, simulates, etc a program the "code as data"
> quality of Lisp is critical to their effort.

I suspect that what is important is that there is a well defined
parse-tree or similar structure that can be derived from the source
code and manipulated by program. Whether the external form makes the
parsing totally trivial (because it is all based on parentheses) or
not is probably far less important. (But see comments below on
manipulating source code vs manipulating structured procedure
objects).

······@bbn.com (Mike Thome) writes:

> In article <·····@well.UUCP> ·····@well.UUCP (John Nagle) writes:
> >     Most of the attempts to make LISP look like Pascal or one of its
> >descendants result in a syntax that is more, rather than less, painful.
> For example, take a look at Logo...

I am amazed that anyone regards Logo as an attempt to make Lisp look
like Pascal! One of the flaws in the design of at least early
versions of Logo (I've not looked at current versions) appears to
have been the assumption that giving people fewer syntax forms to
learn and fewer punctuation symbols (no parentheses, commas,
semi-colons etc) makes the language easier to learn. As if the
problems of a learner have to do primarily with how much typing your
fingers have to do, as opposed to how much parsing your brain has to
do. We seriously considered Logo as a teaching language for AI in the
mid 70s and rejected it in favour of Pop. Had Pop not been available
we'd have preferred Lisp to Logo.

> .... When I was learning lisp, at first
> it was its syntactic consistancy which was probably the most attractive
> attribute.
Yes - there are many people who react like that, especially people
with mathematical aptitude and training. However, there are also
many (including some experienced programmers) who are put off by the
cognitive effort that is required because of the syntactic economy.

>
> >On the other hand, the fact that data and programs have the same
> >representation in LISP really doesn't seem to be used all that much
> >any more.  It was felt to be terribly important at one time, but today,
> >it just doesn't seem to be a big issue.
> >					John Nagle
> I must disagree here - any macro (including most setf expanders) will
> depend on the similarity between data and program......

However, this does not require something like Lisp syntax. E.g.
Pop-11 (following Pop-2) allows macros. The input stream is a
dynamic list of text items (words, strings, numbers etc) and a macro
is just the name of a procedure that reads in a portion of that stream,
changes it, and then puts the result back onto the front of the list.
If the result contains no macros, that is what will be compiled in place
of the original. Lisp syntax makes the reading and rearranging easier.
Pop syntax gives more freedom for syntactic extensions - e.g.
expressions and statements don't have to begin and end with parentheses.

So "similarity between data and program" as in Lisp is not necessary
for macros.

>.......Personally, I've been
> working on a number of different projects which build (sometimes compile)
> routines at runtime.  While this might even be unusual, lisp is, after
> all, one of the very few (only mainstream?) languages which can do this -
> I have a hard time imagining what it would be like to try to implement
> one of these systems in a "conventional" language.

This is a standard facility in Pop-11. In fact, Pop-11 procedures
are provided for planting instructions to the Poplog virtual
machine, which can then be compiled. This enables Poplog Pop-11
to be extended with languages like Common Lisp, Prolog, Scheme and ML
without requiring them to have anything remotely like the syntax of
Pop-11, and without requiring them to be interpreted: they all compile
to machine code.

········@hi3.ACA.MCC.COM (Mark Rosenstein) writes (in response
to John Nagle):

> Ummmm. The only datapoint I have is my own work, and work in the 5
> or 6 projects here at work that I know about. I would say all of them
> use the fact that data and program have the same representation. I
> would be so bold as to exaggerate that the only interesting lisp programs
> are ones that generate and manipulate structure as programs. [Then compile
> 'em and use 'em, of course]. Of course elsewhere it may be different.
> I find this especially true in object oriented systems like CLOS and
> flavors, where a lot of what happens is that the program is happily
> creating new classes and methods for those classes and then using
> them.

In Pop-11 a lot of program manipulation makes use of the fact that
you can "partially apply" a procedure to some data (including other
procedures) to create a new procedure. Partial application creates
structures that can either be RUN as procedures (possibly requiring
extra arguments) or analysed as data (you can examine or alter the
procedure part, or the data-part). These entities tend to run much
faster than interpreted code, yet retain essentially the structure you
require for analysing a procedure at a level of abstraction that
ignores the particular piece of code the procedure was built from.

E.g. -applist- is a procedure that takes a list L and a procedure
P and applies P to every element of L. If -proc- is a procedure, then
applist(%proc%) is a new procedure formed by partially applying
-applist- to -proc-, which can be applied to a list, in which case it
will apply -proc- to every element. You can either RUN this new procedure
or treat it as structured data and access the components (applist and
proc).
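For comparison, Python's functools.partial gives something close to
this run-or-inspect duality (a sketch with illustrative names, not
Pop-11's applist itself):

```python
from functools import partial

def applist(lst, proc):
    """Apply proc to every element of lst (cf. Pop-11's applist)."""
    for x in lst:
        proc(x)

seen = []
p = partial(applist, proc=seen.append)   # partially apply applist to proc

p([1, 2, 3])               # RUN it as a procedure...
print(seen)                # -> [1, 2, 3]

print(p.func is applist)   # ...or examine it as data: the procedure part
print(p.keywords)          # and the data part it was built from
```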

The Lisp way of doing things makes you think in terms of the actual code
rather than in terms of the composition of procedures and data, which, I
suspect, is what is mostly required. However, if you really must work at
the level of code, Pop-11 allows programs to build lists of code, then
compile and run them. So you get both options, though manipulating Pop
code by program is harder than manipulating Lisp code.
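Python's own compile() happens to illustrate the "build code, then
compile and run it" option directly (a sketch; `double` is just an
invented example):

```python
# Assemble source at run time, compile it, then run the result.
lines = ["def double(x):",
         "    return x + x"]
code = compile("\n".join(lines), "<generated>", "exec")

env = {}
exec(code, env)            # install the freshly compiled definition
print(env["double"](21))   # -> 42
```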

There are some examples illustrating all these points in our book
R. Barrett, A. Ramsay and A. Sloman
    POP-11: A Practical Language for AI,
    Ellis Horwood and John Wiley, 1985, reprinted 1986.

although Pop-11 has evolved since this was written. It now includes
full lexical scoping and other extensions.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
    INTERNET: ························@nsfnet-relay.ac.uk
From: Jeff Dalton
Subject: Re: Code as data (Syntax, macros, run-time compilation)
Date: 
Message-ID: <521@skye.ed.ac.uk>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:
>My original posting on the unfortunate effects of Lisp syntax produced a
>number of comments and stirred up a discussion of code as data. This
>message attempts to separate what is important about this in lisp from
>what is superficial.

I think you will find that people disagree about what is important.
You may well say that all of the important aspects are shared by
Pop-11, but that might not be the case.  Perhaps it's important
that Lisp's syntax is simple and corresponds directly to lists.

I think Lisp makes it easier (than does Pop) to see the true
nature of macros.  In my, admittedly limited, experience, Pop-
trained programmers tend to have a faulty understanding of macros.

>In particular I try to show that the syntax of
>lisp has nothing to do with the possibility of having macros

Of course macros are possible with a different syntax.  C has macros.
Assembler has macros.  So it's trivial to show this, and not very
enlightening.

>and that not all cases where code has to be treated as data require
>the original source code to be manipulated.

So?  It's seldom the case that all cases require anything.

>My main point in raising this issue was that the syntax of Lisp has had
>the effect of turning off a number of experienced programmers who might
>otherwise have been tempted to get into AI tools and techniques.

And the complicated syntax of other languages has turned off a
number of people who were pleased to discover that Lisp was different.
So there is anecdotal evidence on both sides.

>> ... Semantics and
>> paradigm are much bigger issues when writing sizable programs.  As
>> programs become larger, syntactical issues retreat in importance and
>> issues such as namespace control and other modularity issues dominate.

>Agreed - but syntax and in particular readability remain relevant when
>someone has to maintain code written by others.

I suppose someone has to say it.

   Semantics and paradigm are much bigger issues when writing or
   maintaining sizable programs.  As programs become larger, ...

Do I have to continue?

>>      Most of the attempts to make LISP look like Pascal or one of its
>> descendants result in a syntax that is more, rather than less, painful.

>Painful for whom? Has this been investigated systematically? I can
>well imagine that someone brought up on and/or very used to Lisp will
>find the extra syntactic clutter objectionable whereas for others it
>might make the language far more approachable (as Pop-11 has proved
>to be for many experienced programmers.)

If neither side has performed a systematic investigation, and
we're just going to say "I can well imagine" back and forth,
what's the point?

But if you want to play this game, it's worth asking why some people
prefer Lisp's syntax.  Remember that almost all of us became used to
conventional mathematical notation, and even conventional programming
languages, first.  Most people see 3+5, and "if ..., then ..." long
before they see (+ 3 5) or COND, so I think your theory about what
people are used to has little to recommend it.  And this brings us to:

>> .... When I was learning lisp, at first it was its syntactic
>> consistency which was probably the most attractive attribute.

>Yes - there are many people who react like that, especially people
>with mathematical aptitude and training. However, there are also
>many (including some experienced programmers) who are put off by the
>cognitive effort that is required because of the syntactic economy.

Now, it's interesting that you're supposing greater cognitive
effort is required when reading Lisp, since many people have
found that just the opposite is true.

You also seem to be ignoring the points made about indentation.
When reading well-formatted Lisp, it is almost never necessary
to read the parentheses; and if functions are short (as they
most often should be), it's easy to keep one's place without
syntactic helpers like "then" and "endif".  Indeed, I find code
harder to read when it's full of "endif" and the like.

>·····@Sesame.Stanford.EDU (Mike Shaff) writes:
>> For people who need to write code that performs analysis of one
>> form or another, simulates, etc a program the "code as data"
>> quality of Lisp is critical to their effort.

>I suspect that what is important is that there is a well defined
>parse-tree or similar structure that can be derived from the source
>code and manipulated by program. Whether the external form makes the
>parsing totally trivial (because it is all based on parentheses) or
>not is probably far less important.

The point isn't just that Lisp's syntax is simple.  It's also
significant that there's a direct correspondence between code and
lists.  There's no need to learn about an additional intermediate
structure.


>······@bbn.com (Mike Thome) writes:
>
>> In article <·····@well.UUCP> ·····@well.UUCP (John Nagle) writes:
>> >On the other hand, the fact that data and programs have the same
>> >representation in LISP really doesn't seem to be used all that much
>> >any more.  It was felt to be terribly important at one time, but today,
>> >it just doesn't seem to be a big issue.
>>
>> I must disagree here - any macro (including most setf expanders) will
>> depend on the similarity between data and program......

>However, this does not require something like Lisp syntax.

Sure it doesn't _require_ it.  But so what?

>Pop-11 (following Pop-2) allows macros. The input stream is a
>dynamic list of text items (words, strings, numbers etc) and a macro
>is just the name of a procedure that reads in a portion of that stream
>changes it and then puts back the result onto the front of the list.

Sounds like a *big* pain to me.

>So "similarity between data and program" as in Lisp is not necessary
>for macros.

Of course it's not _necessary_.  So?

>>.......Personally, I've been
>> working on a number of different projects which build (sometimes compile)
>> routines at runtime.
>
>This is a standard facility in Pop-11. In fact, Pop-11 procedures
>are provided for planting instructions to the Poplog virtual
>machine, which can then be compiled. This enables Poplog Pop-11
>to be extended with languages like Common Lisp, Prolog, Scheme and ML
>without requiring them to have anything remotely like the syntax of
>Pop-11, and without requiring them to be interpreted: they all compile
>to machine code.

How often does the virtual machine have to be changed in order
to accommodate a new language?  Besides, the virtual machine isn't
part of Pop-11 -- it's part of the PopLog system, which, after all,
is just one possible implementation of Pop-11.  So your point about
the virtual machine is a point in favor of virtual machines, perhaps,
but hardly a point in favor of (or against) Pop-11.

And, of course, languages with a syntax unlike that of Lisp can
be implemented by translating them into Lisp.  They needn't be
interpreted, nor follow Lisp syntax.

>In Pop-11 a lot of program manipulation makes use of the fact that
>you can "partially apply" a procedure to some data (including other
>procedures) to create a new procedure.

In Lisp, we tend to call them closures.  

Some things, like Pop's "open stack", make Pop's partial applications
somewhat different from what's offered by modern Lisps.  But the
ability to use functions rather than just their source code (so to
speak) is there in both languages.  And many people prefer the
lexical approach used in modern Lisps.

Pop does have some advantages, and does have some uniformity that
Common Lisp lacks.  (For example, Pop's equivalent of hash tables can
be used as if they were functions.)  ML has some advantages as well.
It should also be said that Pop treated functions as first-class
objects before this notion achieved its current popularity in the
Lisp community.  The criticism you make in the next paragraph
used to make more sense than it does today.  (Though, to be fair,
even Lisp 1.5 funargs could be used as functions of more or less
the right sort.)

>The Lisp way of doing things makes you think in terms of the actual code
>rather than in terms of the composition of procedures and data, which, I
>suspect, is what is mostly required.

Actually, Lisp doesn't _make_ you think that way at all.
From: Aaron Sloman
Subject: Re: Code as data (replies to comments).
Date: 
Message-ID: <1057@syma.sussex.ac.uk>
Well, I am sorry I raised so many hackles. The main thing to
remember is that I am not trying to knock Lisp, but rather that I
feel that AI languages like Lisp in AI development environments have
potential to be of tremendous benefit to all sorts of programmers
outside the lisp community, programmers who, so far, have not shown
much interest.

My conjecture (based on quite a number of years of watching learners
of various kinds) that syntax was, for many programmers, the main
hurdle was perhaps a way of clutching at a straw: let's agree to
offer non-lispers something with a more familiar and (at least
initially) more readable syntax (not necessarily Pop-11 -- I merely
use it for illustrative purposes, as an example, not as an ideal),
with a good set of development tools, and then maybe lots more of
those programmers will begin to appreciate the benefits we all(??)
agree AI languages and development environments can provide.

Well that was the hope.

But perhaps I was just being naive. Perhaps not until C has been
extended with a garbage collector, built-in list-processing
facilities, nested procedures with full lexical scoping, an
incremental compiler(??), universal types with run-time type-checking,
an integrated editor, and many other goodies, will all those people
appreciate what they have been missing?

····@aiai.ed.ac.uk (Jeff Dalton) writes:

> I think Lisp makes it easier (than does Pop) to see the true
> nature of macros.
Well, if that is one way of putting the point that lisp makes it
easier to write macros, I agreed in advance. If it means that Lisp
makes it easier to see the true nature of Lisp macros, then I guess
that's indisputable. But as for any other interpretation, I am
lost as to what "the true nature" could mean.

> In my, admittedly limited, experience, Pop-
> trained programmers tend to have a faulty understanding of macros.
I can well believe that they misunderstand how Lisp (and C) macros work
(at least initially). (And vice versa). Allan Ramsay implemented a
collection of pop-11 macro packages called "forms" that may come
closer to your notion of a macro. (See HELP FORM in Poplog).


> And the complicated syntax of other languages has turned off a
> number of people who were pleased to discover that Lisp was different.
> So there is anecdotal evidence on both sides.
The issue originally was that there is more than anecdotal evidence
that the take up of Lisp as a general purpose programming language
has been disappointing - the premiss with which someone else started
this discussion. I did not think the diagnoses offered by others
(e.g. size of lisp systems) told the whole story.

> >> ... Semantics and
> >> paradigm are much bigger issues when writing sizable programs.  As
> >> programs become larger, syntactical issues retreat in importance and
> >> issues such as namespace control and other modularity issues dominate.
>
> >Agreed - but syntax and in particular readability remain relevant when
> >someone has to maintain code written by others.
>
> I suppose someone has to say it.
>
>    Semantics and paradigm are much bigger issues when writing or
>    maintaining sizable programs.  As programs become larger, ...
>
> Do I have to continue?

I only said "remain relevant". What have I done wrong?

At first I agreed with your comment. But now it strikes me that no
matter how good the "semantics and paradigm" are, you can make the
language less useful than an intrinsically inferior one by choosing
a dreadful syntax. So it is possible for the bigger issue to be
swamped by the smaller one. (Different dimensions of bigness?) (But
I am not claiming this has happened in Lisp, though there are no
doubt some adherents of languages like ML, or even Pascal, that
would be very critical of the semantics of Lisp and Pop.)

> If neither side has performed a systematic investigation, and
> we're just going to say "I can well imagine" back and forth,
> what's the point?

The only point is that I was offering (perhaps naively?) a route for
people to TRY to get lots more programmers interested in Lisp. I am
not claiming that it is guaranteed to succeed. But I would have
thought that people who really cared might want to try, as long as
the evidence that it will fail is only anecdotal. Maybe (some) Lisp
users don't really care about all those people not using AI
languages.

(If we are to believe some of the prophets there soon won't
be any need for writing programs any way, so why bother.....)

> ...Remember that almost all of us became used to
> conventional mathematical notation, and even conventional programming
> languages, first.  Most people see 3+5, and "if ..., then ..." long
> before they see (+ 3 5) or COND, so I think your theory about what
> people are used to has little to recommend it.

I did not say that nobody could be converted to lisp after learning
a different syntax. That would have been self defeating for my
suggestion that learning lisp-like programming techniques in a
different syntax would then make it easier to get into lisp. (That's
the route our students follow.)

> Now, it's interesting that you're supposing greater cognitive
> effort is required when reading Lisp, since many people have
> found that just the opposite is true.

I was assuming everyone agreed that syntactic redundancy (within
reasonable bounds) reduces cognitive effort. Indentation is used by
lisp programmers in an attempt to increase redundancy for that very
reason. But I previously pointed to local ambiguities that
indentation alone does not resolve. (E.g. indentation as such does
not tell you, as a keyword does, that some local variables are being
declared.)

In some longish lisp programs with multi-branch conditionals (e.g.
p.323 of Abelson and Sussman), the fact that all the conditions use
a predicate name ending in "?" is an example of useful syntactic
redundancy of the kind I am saying is mostly lacking in lisp.

I suppose it is debatable whether having redundancy supplied by
conventional devices like this rather than being defined into the
language is good enough.

Anyhow, if my premiss about syntactic redundancy (within bounds)
reducing cognitive load is disputed then we have to call in the
cognitive scientists who do experiments on this kind of thing.
Unfortunately the only systematic experiments I know of (by Max Sime et
al at Sheffield) only involved novices, and over a short time.

Incidentally, I fear that the lack of syntactic redundancy, and the
cognitive load of having to memorise operator precedences, will also
reduce the take-up of ML among programmers in general, though it
may be compensated for by the extra checking a compiler can do,
compared with lisp (or Pop).

>
> You also seem to be ignoring the points made about indentation.

Not ignoring them. I think indentation helps (even in richer
languages). But the point about indentation ignored my point about
needing to use extended context to disambiguate. Indentation gives
information about relative scope - but you still have to look upward
to see that "((" means "elseif" rather than something else. And as
you say, short procedures help.

> When reading well-formatted Lisp, it is almost never necessary
> to read the parentheses;
Has anyone tried a lisp that uses only indentation and none of the
parentheses of the kind that you don't need to read? It would be an
interesting experiment, though I fear the result might be Logo.

> .....and if functions are short (as they
> most often should be), it's easy to keep one's place without
> syntactic helpers like "then" and "endif".

I agree that short lisp function definitions are easy to read.

I guess the lisp code I have found hard to read was not written
by well-trained lispers (even though some of them wrote text
books).

> >·····@Sesame.Stanford.EDU (Mike Shaff) writes:
> >> For people who need to write code that performs analysis of one
> >> form or another, simulates, etc a program the "code as data"
> >> quality of Lisp is critical to their effort.
                        ^^^^^^^^ ( emphasis added by A.S.)
> >I suspect that what is important is that there is a well defined
> >parse-tree or similar structure that can be derived from the source
> >code and manipulated by program. Whether the external form makes the
> >parsing totally trivial (because it is all based on parentheses) or
> >not is probably far less important.
>
> The point isn't just that Lisp's syntax is simple.  It's also
> significant that there's a direct correspondence between code and
> lists.  There's no need to learn about an additional intermediate
> structure.
Yes that may be significant, in the sense that it simplifies some of
the tasks. I was simply trying to argue that it wasn't "critical",
for the reasons given. Maybe "critical" has a stronger meaning for
me than for you.

> >> I must disagree here - any macro (including most setf expanders) will
> >> depend on the similarity between data and program......
>
> >However, this does not require something like Lisp syntax.
>
> Sure it doesn't _require_ it.  But so what?
Perhaps I also misinterpreted the use of "depend on", like "critical"?

> >Pop-11 (following Pop-2) allows macros. The input stream is a
> >dynamic list of text items (words, strings, numbers etc) and a macro
> >is just the name of a procedure that reads in a portion of that stream
> >changes it and then puts back the result onto the front of the list.
>
> Sounds like a *big* pain to me.
Yes it can be, especially if you don't have tools to read in expressions,
expression sequences, etc. I said macros were easier in Lisp.

> >So "similarity between data and program" as in Lisp is not necessary
> >for macros.
>
> Of course it's not _necessary_.  So?
The comments I responded to appeared to be saying it was necessary.

> How often does the (Poplog) virtual machine have to be changed in order
> to accommodate a new language?

It needed a major change (some years ago) to speed up prolog. It
needed a minor change to cope with the fact that Lisp doesn't have a
boolean type separate from lists. It needed an extension to
implement C-like pointer manipulation in the "system" dialect of
Pop-11. It needed major changes for some of the extensions to Pop-11
(e.g. dynamic local expressions).

> ......So your point about
> the virtual machine is a point in favor of virtual machines, perhaps,
> but hardly a point in favor of (or against) Pop-11.
I am sorry to give so much offense without intending it. I was merely
responding to the apparent claim that there was ONLY the lisp way
of doing certain things (i.e. using lisp macros to provide language
extensions.)

> And, of course, languages with a syntax unlike that of Lisp can
> be implemented by translating them into Lisp.  They needn't be
> interpreted, nor follow Lisp syntax.
True. Though you can use macros only to implement things that
can be translated into lisp, just as in Pop you can use macros
only to implement things that can be translated into Pop. It is
occasionally useful to have a more general facility (though that
is strictly irrelevant to our main topic).

> >In Pop-11 a lot of program manipulation makes use of the fact that
> >you can "partially apply" a procedure to some data (including other
> >procedures) to create a new procedure.
>
> In Lisp, we tend to call them closures.
Also in Pop, but I deliberately did not use the word in case anyone
confused partially applied procedures with lexical closures, which,
though useful, are, I believe, less well structured (e.g. since a
lexical closure can bind any non-local variable in a procedure).
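The distinction can be sketched in Python (illustrative names, with
functools.partial standing in for Pop-11's partial application): the
partially applied data is explicit on the resulting object, while a
closure's captured variables are not declared anywhere in its
interface.

```python
from functools import partial

def scale(factor, x):
    return factor * x

p = partial(scale, 3)       # the bound data is explicit and inspectable
print(p(10))                # -> 30
print(p.args)               # -> (3,)

def make_scaler(factor):
    def closure(x):         # may refer to ANY variable of the enclosing
        return factor * x   # procedure; nothing on its face says which
    return closure

c = make_scaler(3)
print(c(10))                # -> 30, but the captured 'factor' is hidden
```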

> ......And many people prefer the
> lexical approach used in modern Lisps.
Yes, it is used in Pop-11 as well, though not, I'd have thought,
for the purposes that I took this comment to be referring to:

> >·····@Sesame.Stanford.EDU (Mike Shaff) writes:
> >> For people who need to write code that performs analysis of one
> >> form or another, simulates, etc a program the "code as data"
> >> quality of Lisp is critical to their effort.

Would lexical closures be used for performing analysis, simulating
etc?

> >The Lisp way of doing things makes you think in terms of the actual code
> >rather than in terms of the composition of procedures and data, which, I
> >suspect, is what is mostly required.
>
> Actually, Lisp doesn't _make_ you think that way at all.

I suspect from your comments on funargs and lexical closures that we
interpreted Mike Shaff differently. E.g. I think you took him to be
talking about the kind of procedural abstraction using closures that
Abelson and Sussman discuss, whereas I took him to be talking about
things that reason about programs where the similarity of code and
data in lisp would strongly incline you to think in terms of
manipulating source code. Yes - "make" was too strong!


···@odi.com (Dan Weinreb) writes:

> ....CGOL apparently never
> caught on with any user community.  Obviously, that fact by itself
> proves nothing about its worthiness, value, etc.  I just wanted to
> make the point that the idea has been explored in the past, at least
> two times, and therefore probably more times than that.  Just thought
> you might be interested.

Yes

········@bbn.com (Albert Boulanger) writes:

> We still use this [CGOL] at BBN. In fact the infix reader for Lisp on the
> Symbolics uses a reimplementation of CGOL. (This can be extended
> somewhat.) We use the Symbolics reimplementation for reading infix
> expressions for a rule structure-editor based on presentations. I
> found that writing CGOL "productions" much more natural than dealing
> with YACC. I think a key element in CGOL for complex parsing is its
> notion of sublanguages (which the Symbolics implementation omits).

Sounds interesting. Does anyone know if any attempt was ever made to
get CGOL taken up by people outside the Lisp community, as opposed
to just trying to get Lispers to change to it?

I suspect it may now be too late for anything like this, given that
so much effort has already been put into simple-syntax lisp, and all
those text-books already exist. The last chance might have been
the year before the Common Lisp systems started coming out.

But perhaps someone with enough initiative and resources could use
the recent publication of the book on Computational Linguistics by
Gazdar and Mellish in three versions (Pop-11, Prolog and Lisp) as an
opportunity to investigate how people who know none of these
languages react to the different versions.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England

    ························@nsfnet-relay.ac.uk
From: Jeff Dalton
Subject: Re: Code as data (replies to comments).
Date: 
Message-ID: <525@skye.ed.ac.uk>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:
>Well, I am sorry I raised so many hackles.

Why do you think you've raised hackles?  You posted an article, some
people replied.  Seems perfectly normal to me, except that there are
so few flames.  (You should try some of the other newsgroups.)

>The main thing to remember is that I am not trying to knock Lisp, but
>rather that I feel that AI languages like Lisp in AI development
>environments have potential to be of tremendous benefit to all sorts
>of programmers outside the lisp community, programmers that, so far,
>have not shown much interest.

It's certainly true that many programmers haven't shown much
interest.  But it's less clear that Lisp's syntax is the main
reason.

Maybe you're right, but how would we know?  To those who dislike
Lisp syntax, it will sound plausible that the syntax is what puts
people off, but not everyone dislikes Lisp syntax (just as not
everyone dislikes vanilla ice cream).

I find Pop harder to read than Lisp.  In part, that's because I'm used
to Lisp.  But my preference for simple syntax is not because I'm used
to Lisp -- it's one of the reasons I became interested in Lisp in the
first place.  And I'm not used only to Lisp, I'm used to several other
languages as well.

I don't think that, in the end, readability is something about
which everyone will agree.  Still, none of this says there aren't
some people who'd prefer Pop's syntax.  So you're probably right
that some people might take up an "AI language" if it had a syntax
other than Lisp's.

>····@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>> I think Lisp makes it easier (than does Pop) to see the true
>> nature of macros.
>Well if that is one way of putting the point that lisp makes it
>easier to write macros I agreed in advance. If it means that Lisp
>makes it easier to see the true nature of Lisp macros, then I guess
>that's indisputable. But as for any other interpretation, I am
>lost as to what "the true nature" could mean.

Note that I didn't say "nature of Lisp macros".  I didn't want to go
into details in a message that was already too long, but what I had in
mind as the "nature" was that macros were source-transformations.
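That is, a macro maps one piece of source onto another before
evaluation. Sketched over Lisp-style nested lists in Python (the
`when` rule is an invented example, not any particular Lisp's):

```python
def expand(form, macros):
    """Recursively apply source-to-source transformations."""
    if isinstance(form, list) and form and form[0] in macros:
        return expand(macros[form[0]](form), macros)  # re-expand result
    if isinstance(form, list):
        return [expand(f, macros) for f in form]
    return form

# (when TEST BODY)  ->  (if TEST BODY nil)
macros = {"when": lambda form: ["if", form[1], form[2], "nil"]}

print(expand(["when", [">", "x", 0], ["print", "x"]], macros))
# -> ['if', ['>', 'x', 0], ['print', 'x'], 'nil']
```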

I can't show that Lisp really does make it easier to see this, though.

>> And the complicated syntax of other languages has turned off a
>> number of people who were pleased to discover that Lisp was different.
>> So there is anecdotal evidence on both sides.

>The issue originally was

The issue here is Lisp syntax.  If we try to discuss too much in each
message they'll be too long for anyone to read.

>> >> ... Semantics and
>> >> paradigm are much bigger issues when writing sizable programs.  As
>> >> programs become larger, syntactical issues retreat in importance and
>> >> issues such as namespace control and other modularity issues dominate.
>>
>> >Agreed - but syntax and in particular readability remain relevant when
>> >someone has to maintain code written by others.
>>
>> I suppose someone has to say it.
>>
>>    Semantics and paradigm are much bigger issues when writing or
>>    maintaining sizable programs.  As programs become larger, ...
>>
>> Do I have to continue?
>
>I only said "remain relevant". What have I done wrong?

It looked like you thought the situation was significantly different
when you took maintenance into account.  But you gave no reason.
And, indeed, that was all you said in response to the claim that
syntactic issues retreated in importance for large programs.
I thought the claim deserved a better response.

>At first I agreed with your comment. But now it strikes me that no
>matter how good the "semantics and paradigm" are, you can make the
>language less useful than an intrinsically inferior one by choosing
>a dreadful syntax.  [remarks about semantic comparisons omitted]

But the issue is _Lisp_ syntax, not some hypothetical "dreadful" one.

>> ...Remember that almost all of us became used to
>> conventional mathematical notation, and even conventional programming
>> languages, first.  Most people see 3+5, and "if ..., then ..." long
>> before they see (+ 3 5) or COND, so I think your theory about what
>> people are used to has little to recommend it.
>
>I did not say that nobody could be converted to lisp after learning
>a different syntax.

You said:

  I can well imagine that someone brought up on and/or very used to
  Lisp will find the extra syntactic clutter [in other languages]
  objectionable whereas for others it might make the language far
  more approachable

The issue was not whether someone could be converted to Lisp, but
whether people who preferred Lisp-style simplicity did so just
because they were used to it.  That may be true in some cases, but it
leaves unanswered the question of why anyone wants to use Lisp in the
first place, especially if what one's used to is so important.  (No
one starts off being used to Lisp.)

>> >I suspect that what is important is that there is a well defined
>> >parse-tree or similar structure that can be derived from the source
>> >code and manipulated by program. Whether the external form makes the
>> >parsing totally trivial (because it is all based on parentheses) or
>> >not is probably far less important.
>>
>> The point isn't just that Lisp's syntax is simple.  It's also
>> significant that there's a direct correspondence between code and
>> lists.  There's no need to learn about an additional intermediate
>> structure.

>Yes that may be significant, in the sense that it simplifies some of
>the tasks. I was simply trying to argue that it wasn't "critical",
>for the reasons given. Maybe "critical" has a stronger meaning for
>me than for you.

Anyway, my point is that the triviality of parsing isn't what is
claimed to be better about Lisp.  It's, in part, that the parsing
doesn't happen at all (i.e., the correspondence is direct).

"Critical" aside, you seem to think an intermediate parse tree 
structure is good enough.  But it it?  Maybe it makes an important
difference if the correspondance is more direct.

>> ......So your point about
>> the virtual machine is a point in favor of virtual machines, perhaps,
>> but hardly a point in favor of (or against) Pop-11.

>I am sorry to give so much offense without intending it. I was merely
>responding to the apparent claim that there was ONLY the lisp way
>of doing certain things (i.e. using lisp macros to provide language
>extensions.)

I don't see where offense comes in.  I wasn't offended.  But things
often sound more extreme in these electronic messages than they were
intended to be.

But I don't think showing that Lisp wasn't the only way to do things
was all you had in mind.  If all you wanted to show was that Lisp
wasn't the only way to get language extensions, you could have done it
in one line, like "C has macros.".  You also wanted (at least) to tell
us something about Pop.

>> >In Pop-11 a lot of program manipulation makes use of the fact that
>> >you can "partially apply" a procedure to some data (including other
>> >procedures) to create a new procedure.
>>
>> In Lisp, we tend to call them closures.

>Also in Pop, but I deliberately did not use the word in case anyone
>confused partially applied procedures with lexical closures, which
>though useful, are, I believe less well structured (e.g. since a
>lexical closure can bind any non-local variable in a procedure).

I think you are wrong about lexical closures being less structured.

Perhaps it matters how you think about them.  Lexical closures don't
bind non-local variables.  There's already a binding, and, in the
procedure, a reference to it.  All the lexical closure does is give
that reference indefinite extent.  That is, the reference doesn't
become invalid just because the expression that introduced that
non-local variable exits.
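
To make that concrete, here's a small (invented) Common Lisp fragment:

   (defun make-counter ()
     (let ((n 0))                  ; the one and only binding of N
       #'(lambda () (incf n))))    ; the closure keeps a reference to it

The closure returned by MAKE-COUNTER doesn't bind N anew; it just
keeps the existing reference to N's binding valid after MAKE-COUNTER
returns.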

>> >·····@Sesame.Stanford.EDU (Mike Shaff) writes:
>> >> For people who need to write code that performs analysis of one
>> >> form or another, simulates, etc a program the "code as data"
>> >> quality of Lisp is critical to their effort.
>
>Would lexical closures be used for performing analysis, simulating
>etc?

Would Pop closures?  You brought in closures, and I just
responded to what you said.

>I suspect from your comments on funargs and lexical closures that we
>interpreted Mike Shaff differently. E.g. I think you took him to be
>talking about the kind of procedural abstraction using closures [...]

Actually, I did not interpret Mike Shaff in the way you suspect.
You introduced closures, and that's why I commented on them.

>But perhaps someone with enough initiative and resources could use
>the recent publication of the book on Computational Linguistics by
>Gazdar and Mellish in three versions (Pop-11, Prolog and Lisp) as an
>opportunity to investigate how people who know none of these
>languages react to the different versions.

I've looked at the three versions and would not regard this as a valid
comparison.  It would still be interesting, but I don't think the
different versions are sufficiently equivalent for language choice
to be the only factor that's significant.  There are lots of different
ways to write the Lisp code, for example.

-- Jeff
From: Jeff Dalton
Subject: Reading Lisp: parens and indentation
Date: 
Message-ID: <526@skye.ed.ac.uk>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:
>····@aiai.ed.ac.uk (Jeff Dalton) writes:
>> Now, it's interesting that you're supposing greater cognitive
>> effort is required when reading Lisp, since many people have
>> found that just the opposite is true.

>I was assuming everyone agreed that syntactic redundancy (within
>reasonable bounds) reduces cognitive effort.

Then why do so many people find that there is *less* effort involved
in reading Lisp?  There still needs to be an answer to this.  It
should be clear, I think, that redundancy isn't all there is to it.

>Indentation is used by lisp programmers in an attempt to increase
>redundancy for that very reason.

Programmers don't think "I'd better add some redundancy", and it
may be that their view of things is a more enlightening one.

Indentation is used in an effort to make programs more readable.
It works because it gives some information in a different, more
readable, way.  It's not just an increase in redundancy, because
its goal is to make it possible to ignore the parentheses, not
to give you two ways to "read" them.  Now, in some cases there's
redundancy.  For example, you know that CAR takes one argument,
and the indentation also shows that the (one) argument goes with
CAR.  But in other cases, it's the indentation alone (given that
we're ignoring the parens) that says what goes with what.

Besides, redundancy can get in the way.

>But I previously pointed to local ambiguities that indentation alone
>does not resolve. (E.g. indentation as such does not tell you, as a
>keyword does, that some local variables are being declared.

Sure it does:

   (labels ((f (x) (+ x 1))
            (g (x) (f (f x))))
     (f (g x)))

The indentation is the main signal that f and g are being "declared"
in the first two lines.

>> You also seem to be ignoring the points made about indentation.
>
>Not ignoring them. I think indentation helps (even in richer
>languages). But the point about indentation ignored my point about
>needing to use extended context to disambiguate. Indentation gives
>information about relative scope - but you still have to look upward
>to see that "((" means "elseif" rather than something else. And as
>you say, short procedures help.

I didn't ignore your point.  Indeed, I replied to it, and you
quoted my reply in your message (see below where I quote the
quote).

It is true that keywords, like "then" and "else" can make it easier
to keep one's place.  And it's easy to add such keywords to an IF
macro.  Franz Lisp had them.  Many Lisp programmers don't use this
approach, but it can be, and has been, done.

However, you don't have to look up (to see the "COND"?) to see that
"((" means "elseif".

First, consider Common Lisp.  Common Lisp has some redundancy (you
might call it) that Scheme lacks, namely it requires FUNCALL when
the function to be called is determined by some expression rather
than just a name.  Consequently, "((" is very rare in Common Lisp.

But, really, "((" doesn't mean "elseif".  Rather, it's being in a
certain position in a COND that means "elseif".  If you jump into the
middle of a long COND, the "((" may be an important clue to where you
are.  But if you start reading from the beginning, it's less
important.  Then you just remember that you're in a COND, and
everything, not just the "((", tells you where you are.  You know
you've just finished one clause and hence that the next line will
begin a new one.  And, sure enough, the indentation gets less and
there you are.
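
For instance, in a short COND (an invented example):

   (cond ((eq x 'a) 'first)       ; "((" begins a clause
         ((eq x 'b) 'second)      ; reading in sequence, each "(("
         (t 'otherwise))          ; just confirms "here's the next clause"

Reading from the top, you already know a new line at that indentation
begins a clause, so the "((" confirms rather than informs.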

>> When reading well-formatted Lisp, it is almost never necessary
>> to read the parentheses;

>Has anyone tried using a lisp using only indentation and no
>parentheses of the kind that you don't need to read? It would be an
>interesting experiment, though I fear the result might be Logo.

Lisp uses a Polish prefix notation.  If the number of arguments for
each operator were fixed, there would be no need for parentheses.
Without parentheses, you'd write functions like this:

   defun factorial x
     if = x 0
        1
        * x factorial - x 1

Now, it's clear that the parentheses add something.  But, to make
my point more precisely, they don't have to be read one-by-one.
It's not as if each were a "begin" or "end".  They're smaller
and less significant.

Lisp programmers are not constantly thinking "this parenthesis closes
the call to f" as they read.  Groupings are made in part by indentation
and in part by the shapes of parenthesis groups.  It's not necessary
to read every parenthesis individually.

I've seen some people add redundancy to closing parens by putting
them on a line by themselves along with a comment saying what they
close.  I find the result *much* harder to read than one that puts
less emphasis on the "end brackets".

>> .....and if functions are short (as they
>> most often should be), it's easy to keep one's place without
>> syntactic helpers like "then" and "endif".
>
>I agree that short lisp function definitions are easy to read.

Long procedures are harder to read in any language.

>I guess the lisp code I have found hard to read was not written
>by well-trained lispers (even though some of them wrote text
>books)

It's not the case that everyone finds the same things easy to read.
Still, no doubt one has to become used to Lisp before it becomes easy.
But that much is true of all programming languages.  It may be a
greater problem with Lisp, but since we lack hard data I don't think
we can say much more than that.

There are a number of reasons why Lisp code in textbooks might be
hard to read.  For one thing, much of it is in upper case.  For
another, much of the code isn't well-written.  Moreover, many
texts are not well-formatted, use unreadable fonts, etc.

-- Jeff
From: Rob MacLachlan
Subject: Re: Reading Lisp: parens and indentation
Date: 
Message-ID: <5172@pt.cs.cmu.edu>
Although I am quite happy with Lisp syntax, I believe that prejudice
against Lisp syntax is very real.  I think that this may be a major reason
behind the "ghettoization" of the Lisp community.  I know an experienced
Lisp programmer who was doing some non-lisp-specific work on programming
SIMD processors.  Pursuant to this, he had to design and implement a new
language.  He initially used Lisp syntax, since he was comfortable with it,
and it allowed him to use the lisp reader as his parser, but he found it
difficult to get anyone without a Lisp background to read his paper.  He then
modified a parser generator to emit Lisp code, changed his syntax and had a
much better reception.

I do believe that Lisp syntax is initially somewhat harder to comprehend
than more conventional syntaxes, although this might be less of a problem in
a simple teaching subset that omitted warts such as COND.  But I also
suspect that Lisp syntax is something of a local optimum, given the
advantages of easy code manipulation.  The thing is that the list
representation of Lisp has a moderate similarity to the underlying
semantics.

Modern Lisp compilers have moved away from using lists as the internal
representation for code.  The most important shortcoming of lists is that
they are difficult to annotate with the results of semantic analysis.
It would be useful for users to be able to write "macros" that transformed
these richer intermediate representations, but this is difficult because the
richer representation is harder to comprehend.

  Rob MacLachlan   CMU Common Lisp group
-- 
From: Stephen Knight
Subject: Re: Re: Code as data (replies to comments).
Date: 
Message-ID: <1350020@otter.hpl.hp.com>
The discussion on the merits and demerits of Lisp syntax touched a couple
of interesting points that I'd like to comment on.

1.  Code as Data, Macros and So On

Macros in languages such as C or Pop11 are quite different beasts from those
supported in Lisp(s).  Lisp does not make any distinction between parse and
lexis, so that macro processing in Lisp is parse-tree processing.  In other
languages, macro processing is typically performed at the level of a flat,
unstructured input stream.

It should be plain that this makes Lisp macros very much more powerful and
useful (when used in a disciplined fashion).  However, the lack of a distinct
lexis phase, the read-table being a poor substitute, rather limits the 
syntactic flexibility of macros.  It's usually possible to achieve the effects
required by abusing the read-table but a separate phase of chunking the 
input stream into tokens (as in Pop11) would be beneficial.

To perform the kind of parse-tree processing that Lisp does, it is clear that
some kind of data-representation of code is required.  The questions that are
raised are (a) is it useful to show this relationship in the syntax? (b)
if it is, does this mean that some kind of prefix syntax (or variation)
is inevitable?

I cannot convince myself that it is valuable to expose the parse-tree 
representation in the syntax.  It is convenient in some sense, because only
one thing has to be learnt.  It is inconvenient in another sense, because
the syntactic presentation of programs should be geared to people rather than
an internal representation.

However, even if one believes that the price is right, there is no reason
to take the s-expression route.  A good counter-example is provided by the
(DEC-10) syntax for Prolog.  Prolog programs are also Prolog data structures
but the syntax is attractive.  The solution in Prolog is to arrange for the
syntactic properties of the data constructors to be the same as the 
syntax of the program.

2.  Why Don't Lispers Use PreProcessors?

An earlier comment suggested that the general lack of interest in preprocessors
reflected satisfaction with Lisp's syntax.  That story doesn't match my
own.  

I am inclined to think that the main reason for the lack of interest is that the
benefits of the superior syntax are outweighed by non-language disadvantages.
Firstly, any program I deliver using this syntax has even more development
environment captured (yuk).  Next, the error reporting typically exposes
the fact I am using a preprocessor.  Some of the preprocessors are based on
the Lisp reader so that one gets all the disadvantages.  There are no standard
textbooks I can read to learn this new syntax.  When I deliver a program to
the person who has the unenviable task of maintenance, they are unlikely
to know this new syntax.  

3. Is Lisp Syntax Perfect?

As far as I am concerned, any language which makes me write
    (+ x (* y z))
has made a rather dubious compromise.  None of the respondents to the
original posting could find any fault with the syntax!  Outside the Lisp
community, the syntax is widely considered to be a laughing matter.  Surely
Lispers generally have a more balanced view of the compromises made in the
syntactic design.

One doesn't have to go very far before noticing some very strange things 
about Lisp syntax.  (These comments are a bit loose because different Lisps
have different syntactic details.  Forgive.)  

Here follows a non-exhaustive list of funnies in Lisp syntax.  You can skip
this if you've got the idea.

For example, the basic form
of a function application is (f x y).  However, this shape can occur in
many places with radically different meaning e.g. a let binding.  Far more
logical, I believe, would be to have a key word (e.g. apply) which would
notify the reader that this was an apply node.  This would have an additional
benefit that should the language be extended with new syntactic forms (special
forms) there would be no name clash.  Of course, the fact that this would
be clumsy and verbose to write (and read) points out that syntax is not
just a matter of taste.  The idea of an 'apply special form' is good for
the machine but bad for people.  

Another example would be the overloading of #.  Why have #'(lambda ...) at 
all?  Surely having (lambda ...) denote a function is a perfectly simple
concept that fits well with the idea of a parse tree, and so on.

Why denote true and false by #T and #F?  Why bother to denote them at all?
One could simply use variables "true", "false", and "nil".  Why clutter up
the syntax so pointlessly?  

Why denote vectors with #( ... )?  Why not use different brackets?  How is
#( ... ) any more tightly bound to the parse tree than, say, { ... } (as in
Pop11).  Indeed, why write lists as (list E1 .. En) rather than, say,
[E1 ... En]?  After all, if one can have #( ... ) for vectors, why shouldn't
lists have equal privilege?

On the topic of macros, there is actually a need to write in a disciplined 
fashion.  One might be tempted to think that
    (f E1 ... En)
means apply f to the results of E1 to En.  However, if "f" is a macro, there
is no obligation for it to evaluate E1 to En, at all.  It might quote bits
of them, evaluate bits of them, or anything at all.  Furthermore, any
errors reported will be described in terms of the resultant code -- which 
you may well be completely unaware of.
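
For instance, a macro (invented purely for illustration) might treat its
arguments quite asymmetrically:

   (defmacro funny (e1 e2)
     `(list ',e1 ,e2))            ; quotes E1, evaluates E2

   (funny (+ 1 2) (+ 1 2))       ; => ((+ 1 2) 3)

Nothing in the call itself warns the reader that the first (+ 1 2) will
never be evaluated.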

Why are atoms quoted with a single quote at the start?  Why not have a
quote at the end as well?  Most people are used to quotes matching.
Brackets match, after all.  Why not quotes?  String quotes match (of
course). 

4.  Summary

As far as I can see, these points (and endless others) add up as follows.
Lisp's syntax is a compromise between uniformity and readability.  There
are many inconsistencies which reflect the growth of Lisp's syntax away
from the machine and towards the user.  In order to satisfy both the desires
of structural uniformity (for the machine) and readability (for the human)
a distinction between concrete syntax and abstract syntax is valuable.
Almost all language designers have come to this conclusion.  Lisp's syntax
remains an anachronism that inhibits the acceptance of the language in 
many contexts where its superior expressive power would be appropriate.
From: Dan Weinreb
Subject: Re: Code as data (replies to comments).
Date: 
Message-ID: <382@odi.ODI.COM>
In article <·······@otter.hpl.hp.com> ···@otter.hpl.hp.com (Stephen Knight) writes:

   As far as I am concerned, any language which makes me write
       (+ x (* y z))
   has made a rather dubious compromise.  

It's a very important benefit of Lisp that there is no distinction
between "built-in operators" and "user-defined functions".  They are
treated equally, and therefore have the same syntax.  This is
important because (1) Lisp is an extensible language; there must be no
system/user distinctions, and (2) because functions are manipulated at
runtime so frequently.  For example, "mapcar" works the same way with
all functions, built-in or your own.  Meanwhile, Lisp also has the
advantage that nobody has to learn a bunch of arbitrary "precedence
rules"; everything just works in the obvious way.  For people who
*really* want infix arithmetic (usually people writing heavily
math-oriented code in Lisp -- it does happen), some dialects of Lisp
(e.g. Symbolics) provide an infix syntax as well.
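
For example (illustrative only):

   (defun double (n) (* n 2))       ; a user-defined function
   (mapcar #'double '(1 2 3))       ; => (2 4 6)
   (mapcar #'1+ '(1 2 3))           ; => (2 3 4), built-in, same syntax

The user's function and the built-in one are passed and called in
exactly the same way.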

							   Outside the Lisp
   community, the syntax is widely considered to be a laughing matter.  

It's gratifying to see that this conversation is being kept on a
rational, intellectual plane.

   One doesn't have to go very far before noticing some very strange things 
   about Lisp syntax.  

In just about all the cases you listed, there's nothing "very
strange"; these things are simply different from what you are familiar
with.  Many choices in language design are essentially arbitrary.  In
some other cases, your point has been answered by previous postings.
In some cases, there are awkward things in mainstream Lisps that are
indeed hangovers from the past and cannot be changed in mainstream
Lisp for compatibility reasons, but none is important.  Some are fixed
in Scheme, which doesn't have those constraints.

   Why denote vectors with #( ... )?  Why not use different brackets?  

Why use brackets?  Why not use #( ... )?  The actual reason is that
brackets were intentionally reserved for user extensions; all system
extensions were put on # in order to help keep out of the way of the
user.  One of the unusual design criteria of Lisp is that people often
use it to create their own embedded languages, and this little
decision makes it a bit more convenient for some of them.

   On the topic of macros, there is actually a need to write in a disciplined 
   fashion.  One might be tempted to think that
       (f E1 ... En)
   means apply f to the results of E1 to En.  However, if "f" is a macro, there
   is no obligation for it to evaluate E1 to En, at all.  It might quote bits
   of them, evaluate bits of them, or anything at all.  Furthermore, any
   errors reported will be described in terms of the resultant code -- which 
   you may well be completely unaware of.

Yes, that's inherent in the concept of an extensible language; the
reader of a program has to know the extensions.  Often macros have
names chosen according to certain conventions, to make them easy to
recognize in some common paradigms.  You also have to know the Lisp
special forms like "quote" and "defun".  The only way to "solve" this
"problem" would be to take away one of Lisp's most important features:
true uniform extensibility.  If you don't want a uniformly extensible
language, use some other language.

   Why are atoms quoted with a single quote at the start?

Everything is quoted with a single quote at the start, not only
"atoms".  There's no need for a close quote; it would be entirely
redundant.  I have never, in my whole life, heard of anyone having
trouble due to this.  You can pick at little issues all day, but if
nobody is actually being harmed, they don't carry much weight.

   Lisp's syntax is a compromise between uniformity and readability.  

Of course, I disagree strongly.  It's the most readable thing there
is, in my opinion.  As I said earlier, I certainly agree that it puts
people off, but only because it's unfamiliar, not because there's
anything wrong with it.
From: J A Biep Durieux
Subject: (Long) Code as data (Lisp's syntax - Answer to Stephen Knight).
Date: 
Message-ID: <2727@ski.cs.vu.nl>
In article <·······@otter.hpl.hp.com>,
	···@otter.hpl.hp.com (Stephen Knight) writes:
>The discussion on the merits and demerits of Lisp syntax touched a couple
>of interesting points that I'd like to comment on.

>Macros in languages such as C or Pop11 are quite different beasts to that
>supported in Lisp(s).  Lisp does not make any distinction between parse and
>lexis so that macro processing in Lisp is parse tree procesing.  In other
>languages, macro processing is typically performed at the level of a flat,
>unstructured input stream.

Lisp works in three steps: read, eval and print.  The read step does the
lexical parsing (from string to parse tree), the eval step interprets the
resulting parse tree to an answer (in tree form), and the print step
translates the answer into string form again.

Connected with these three steps, Lisp has read macros, eval macros and
print macros, to steer these processes.  Besides that, one can replace
any of the procedures read, eval and print completely, if one wants to.
Many Lisps support several syntactic styles (parenthesised prefix notation,
mathematical notation, Pascal-like notation, etc.), several interpretation
styles (applicative, imperative-iterative, etc.), and several output styles
(normally matching the syntactic input styles).

So "macro processing" is really three different things in Lisp.

>As far as I am concerned, any language which makes me write
>    (+ x (* y z))	has made a rather dubious compromise.

Lisp doesn't "make you write" that. It just happens to be the default, just
like infix operators don't happen to be the default in Prolog.  One has to
declare them, just like one has to describe one's preferred syntax in Lisp.
Normally that boils down to just declaring that one wants infix notation,
if that is what one wants.
Any problems with e.g. [x+y*z], where the brackets are used to indicate the
switch in syntactic style (that is, if one writes in infix Pascal style
only, no brackets are needed)?

>Another example would be the overloading of #.  Why have #'(lambda ...) at 
>all?  Surely having (lambda ...) denote a function is a perfectly simple
>concept that fits well with the idea of a parse tree, and so on.

This is rather widely agreed on, and e.g. Scheme does things that way.

>Why denote true and false by #T and #F?  Why bother to denote them at all?
>One could simply use variables "true", "false", and "nil".  Why clutter up
>the syntax so pointlessly?  

Because one wants to be able to denote other data types, not just to be able
to write expressions that evaluate to them.  Another thing, of course, is
whether one should have many data types in the first place, but once they
are there they should be denotable.

>Why denote vectors with #( ... )?  Why not use different brackets?  How is
>#( ... ) any more tightly bound to the parse tree than, say, { ... } (as in
>Pop11).

Just to leave room for other syntactic conventions to work together with
the default.

>Indeed, why write lists as (list E1 .. En) rather than, say,
>[E1 ... En]?  After all, if one can have #( ... ) for vectors, why shouldn't
>lists have equal priviledge?

Again you are confusing denotation and evaluation.
(a b c) *denotes* a list with the elements a, b and c.
#(a b c) *denotes* a vector with the elements a, b, and c.

[outside evaluation context. Otherwise the above have to be quoted.]

(list 'a 'b 'c) *evaluates* to a fresh (not used before) list with the
	elements a, b, and c.

>Why are atoms quoted with a single quote at the start?  Why not have a quote
>at the end as well?  Most people are used to quotes matching.  Brackets match,
>after all.  Why not quotes?  String quotes match (of course). 

These are totally different entities.  Brackets and string quotes are lexical
annotations that help in parsing the input.  The quote is a read macro that
stands for a surrounding list with a first element "quote", that tells the
*evaluator* that the following S-expression (be it an atom, a list or a string)
should be taken as data.  A quote at the end would only introduce a possibility
for errors to occur: it might be put at the wrong place.
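
For example, the reader turns the quote mark into an ordinary list
before the evaluator ever sees it:

   (read-from-string "'x")       ; => (QUOTE X)
   (equal ''x '(quote x))        ; => T

So the quote mark is pure input convenience; at the parse-tree level
there is nothing left to "close".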

>The example of '#( ... )' was intended to provoke the observation that
>the choice of '#(' is completely arbitrary and therefore Lisp has a concrete
>syntax, at least in this instance.

But of course it has.  A default syntax, at least: "(" and ")" quite
arbitrarily denote list delimiters, "." denotes a pair half separator,
whitespace separates tokens, tokens consisting of digits only stand for
numbers, etc.

>Lisp has slowly but surely over the years acquired a concrete syntax whose
>relationship with the underlying parse tree is less and less direct.

Well, roughly a syntactic construct per data type. What is "less direct"
about that?

>It should be clear that there is an implicit closer for an open quote --
>namely the token boundary.  If the atom being quoted contains a token
>boundary in its text it needs quoting in a more conventional manner with
>string-like quotes.  As this is only an occasional requirement, Lisp has
>a second quoting convention rather than only one.  (It has at least
>three, of course.)

Quoting occurs at the parse-tree level (to be more exact: at the compilation
or evaluation instance), and the concept of "token" doesn't exist there.
I get the feeling you are mixing levels: at the same time you seem to be
talking about token delimiting (as with vertical bars) and evaluation
prevention (with the quote construction). Quote is like lambda or defun.

Look at these examples:

(define (a\ function x) (car x))	-->	|a function|

(|a function| (quote (a b c d e)))	-->	a

(\a\ \f\u\n\c\t\i\o\n ''x)		-->	quote

(1) |a function| is an atom, and its token representation is unimportant at
    evaluation time.
(2) Quoting may be done by writing "quote" explicitly.
(3) If the quote-readmacro is used, in the parse tree the word "quote"
    still appears.
(4) Token delimiting doesn't automatically quote the token.

>The absurdity is that a quote is chosen for the implicit close convention and
>a vertical bar for the open-close convention.  The point here is that, for a
>beginner, Lisp is dominated by bracket-pairs which must be matched exactly.
>But almost immediately, the beginner is confronted with a quote (a close
>quote, in fact) that doesn't need opening.  The impression of arbitrary rules
>and poor choices is established almost immediately.

I can only interpret this if I assume that you indeed were confused about
lexical delimiting and evaluation prevention.
By the way, when you learned about function differentiation, did you feel
the opening quote was missing from f'(x) ?

>All and *only* literals would be quoted.  Thus where one would have written
>    (f 'a 'b 7)	one now has	(APPLY f (QUOTE a) (QUOTE b) (QUOTE 7))
>[So far, this should be familiar to anyone who has bumped into Lispkit Lisp.]  

All literals *may* be quoted already, and *only* literals can be quoted.
Self-evaluating expressions are an optional addition for ease of writing.

>The 'visionary' stage (which I freely admit is no more than a fantasy) is
>then to restructure Lisp standards to accomodate the idea of multiple
>syntaxes.

Have you had experience with a good Lisp environment (e.g. Interlisp)?
They all have at least two syntaxes.

>Needless to say, the first and original syntax will be there, in all its
>glorious technicolor.  So this should satisfy Lispers who want to stick with
>the current concrete syntax.

Except the standardisation part, this is the current situation.  And see what
happens: almost everybody sticks with the old syntax.

>* If the language is intended for functional programming then I prefer
>  function application to be denoted by expression juxtaposition.  This 
>  doesn't work so well in Lisp because functions can take multiple
>  arguments.  So for this purpose, I think that the standard prefix form
>  that is familiar from mathematics and languages such as FORTRAN and Pascal
>  is appropriate.

Hey, going back to the olden days of M-expressions? What about EVALQUOTE?
No, seriously, I think there is a reason why S-expressions won out over
M-expressions once they were introduced.

>* I think it is a point of good syntax design for the formal parameter list
>  to mirror the actual parameter list.  Thus I prefer DEFINE to DEFUN.

Agreed.  In fact, I think (define (f x y z) ...) will win over
(defun f (x y z) ...).  Lispers normally recognise something good when
they see it.  (Indeed, I think it's something good.)
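
Compare, to illustrate the mirroring:

   (define (f x y) (+ x y))   ; Scheme: the definition mirrors the call (f 1 2)
   (defun f (x y) (+ x y))    ; Common Lisp: the parameter list stands apart

In the DEFINE form, the defining shape (f x y) is exactly the shape of
a call to f.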

>* The list syntax of Prolog is admirable because it elegantly integrates
>  bracketing with list appending without the need for quoting (as in Lisp
>  and Pop11) and *also* works in a functional context, unlike other Prologian
>  ideas.

But what would a literal list look like, then?  Remember that

(define (foo) '(a b c))
(eq (foo) (foo))		--> #t,

but
(define (bar) (list 'a 'b 'c))
(eq (bar) (bar))		--> #f.

How would you write these two in your notation?

>Examples:

>define append( x, y ) as
>    if x = [] then
>        y
>    else
>        [ x.head | append( x.tail, y ) ]
>    endif
>enddefine

This looks a lot like Interlisp's Clisp syntax.

>The next example is defining 'reverse'.  I'd do it like this.
>
>define fold( op, seed, list ) as
>    if list = [] then
>        seed
>    else
>        fold( op, op( list.head, seed ), list.tail )
>    endif
>enddefine
>
>define reverse( list ) as
>    fold( cons, [], list )
>enddefine

Why not define "fold" locally within "reverse"? You want the user to get
hold of (and possibly redefine) "fold" for some reason?

>And if I suspected the compiler was too stupid to optimise 'fold' into 
>an iterative format, I'd write the following, where -> is assignment from
>left to right.
>
>define fold( op, seed, list ) -> seed as
>    for i in list do
>        op( i, seed ) -> seed
>    endfor
>enddefine

Apart from the left-to-right assignment, I would surely take a look at Clisp.
-- 
						Biep.  (····@cs.vu.nl via mcvax)
	Who am I to doubt the existence of God?   I am
	  only a simple man,  I already have trouble
	enough doubting the existence of my neighbour!
From: Stephen Knight
Subject: Re: (Long) Code as data (Lisp's syntax - Answer to Stephen Knight).
Date: 
Message-ID: <1350022@otter.hpl.hp.com>
> Any problems with e.g. [x+y*z], where the brackets are used to indicate the
> switch in syntactic style (that is, if one writes in infix Pascal style
> only, no brackets are needed)?

Essentially, no.  Lisps that support this infix syntax are mostly good news.
What I am asking for, however, is a presentation of Lisp with a totally 
different syntax, including printing of expressions, error messages, and 
the support of an agreed standard.

> Another thing, of course, is
> whether one should have many data types in the first place, but once they
> are there they should be denotable.
                 ^^^^^^^^^^^^^^^^^^^
There's no need.  Why clutter up the language?

> >Lisp has slowly but surely over the years acquired a concrete syntax whose
> >relationship with the underlying parse tree is less and less direct.
> 
> Well, roughly a syntactic construct per data type. What is "less direct"
> about that?

A fair question.  In the syntax I sketchily outlined, there is also a syntactic
construct per data type (rather fewer, in fact) with a one-to-one correspondence.
But I don't make the claim that my syntax has an obvious relationship to the
parse tree.  So I don't consider this to be a rebuttal.  I think it is the
role of concrete syntax to present the abstract syntax in a more readable and
inevitably less direct form.

> I can only interpret this if I assume that you indeed were confused 

Well, it's a fair cop, guv.  I forgot at the time of writing that '|' is not
a quote at the semantic but only at the syntactic level.  This doesn't actually
affect the underlying argument.  It just makes it harder to understand :-)

> All literals *may* be quoted already, and *only* literals can be quoted.
> Self-evaluating expressions are an optional addition for ease of writing.

And the ease of writing is at the expense of machine tractability as the
two programs 0 and '0 are equivalent.  I frequently find myself writing
canonisation programs to eliminate this (and other) ambiguities.

> Have you had experience with a good Lisp environment (e.g. Interlisp)?

You're too quick for me!  I've done no serious work on any dedicated Lisp 
hardware, which seems to be virtually synonymous with this.  For reasons
I've already explained, I don't believe it makes sense for programmers to
use infix-syntax *in the current situation*.  What makes my fantasy a fantasy
is that it supposes the situation will change!

> Except the standardisation part, this is the current situation.  And see what
> happens: almost everybody sticks with the old syntax.

Yes, I've seen it happen.  I've used Lisps with multiple syntaxes & no one,
including me, switched.  I tried to outline the reasons in my first letter.

> Hey, going back to the olden days of M-expressions? What about EVALQUOTE?
> No, seriously, I think there is a reason why S-expressions won out over
> M-expressions once they were introduced.

I'd be interested.

> >* The list syntax of Prolog is admirable because it elegantly integrates
> >  bracketing with list appending without the need for quoting (as in Lisp
> >  and Pop11) and *also* works in a functional context, unlike other Prologian
> >  ideas.
> 
> But what would a literal list look like, then?  Remember that

This is a very pertinent question indeed.  Semantically, you need a quoting
mechanism.  So what is the syntax going to be?  Pop11 has two suggestions to
make.  Firstly it has a pair of "compile-time" or quoting brackets.  Secondly,
it has a class of identifiers (lexical constants) whose values are established
at compile time.  I will take the first approach for simplicity, as the second
approach requires more explanation.  The quoting brackets will be #< and >#.
(OK, you think they look ugly?  Good, they're meant to be.  They're
potentially dangerous.)  So here are the suggested examples encoded, assuming
that word quotes are single quotes (which match, of course):

> (define (foo) '(a b c))
> (eq (foo) (foo))		--> #t,

define foo() as
    #< ['a', 'b', 'c'] >#
enddefine

> (define (bar) (list 'a 'b 'c))
> (eq (bar) (bar))		--> #f.

define bar() as
    ['a', 'b', 'c']
enddefine

Neither of these looks as cute as the original.  There are two reasons why this
is so, reasons which are designed into the outlined syntax.  Firstly, compile
time constants that are mutable are potential bugs.  Secondly, quoting words
explicitly is relatively uncommon and often poor practice.  The syntactic
overhead of quoting them is tolerable and in line with the syntax of all other
denotable values.

> [why not locally define fold?]

It's soooo useful.  Otherwise, local definition would be better style.

> Apart from the left-to-right assignment, I would surely take a look at Clisp.

Will it run on HP9000/300 series ?-)  More seriously, thanks for pointing
it out.  I'll try.

Steve Knight
HP Labs, Bristol, UK
From: Jeff Dalton
Subject: Re: (Long) Code as data (Lisp's syntax - Answer to Stephen Knight).
Date: 
Message-ID: <538@skye.ed.ac.uk>
In article <·······@otter.hpl.hp.com> ···@otter.hpl.hp.com (Stephen Knight) writes:
>> Another thing, of course, is
>> whether one should have many data types in the first place, but once they
>> are there they should be denotable.
>                 ^^^^^^^^^^^^^^^^^^^
>There's no need.  Why clutter up the language?

So that values of those types can be printed and read.

>> I can only interpret this if I assume that you indeed were confused 

>Well, its a fair cop, guv.  I forgot at the time of writing that '|' is not
>a quote at the semantic but only at the syntactic level.  This doesn't
>actually affect the underlying argument.

I think it does affect the underlying argument.  

>> All literals *may* be quoted already, and *only* literals can be quoted.
>> Self-evaluating expressions are an optional addition for ease of writing.

>And the ease of writing is at the expense of machine tractability as the
>two programs 0 and '0 are equivalent.  I frequently find myself writing
>canonisation programs to eliminate this (and other) ambiguities.

I would be surprised if many people found this to be a problem.

>Yes, I've seen it happen.  I've used Lisps with multiple syntaxes & no one,
>including me, switched.  I tried to outline the reasons in my first letter.

But you're proposing a Lisp with multiple syntaxes.  I've read your
various letters and don't see how your proposal avoids the problems
you've mentioned.
From: Mike Clarkson
Subject: Re: Re: Code as data (replies to comments).
Date: 
Message-ID: <111@ists.ists.ca>
In article <·······@otter.hpl.hp.com> ···@otter.hpl.hp.com (Stephen Knight) writes:
>As far as I am concerned, any language which makes me write
>    (+ x (* y z))
>has made a rather dubious compromise.  None of the respondents to the
>original posting could find any fault with the syntax!  Outside the Lisp
>community, the syntax is widely considered to be a laughing matter. 

Let them laugh.  Most Lisp programmers use Emacs, and consider the
parentheses in properly indented Lisp code to be a valuable debugging tool.

>Another example would be the overloading of #.  Why have #'(lambda ...) at 
>all?  Surely having (lambda ...) denote a function is a perfectly simple
>concept that fits well with the idea of a parse tree, and so on.

This is strictly a Common Lisp problem.  Scheme has first class functions.

>Why denote true and false by #T and #F?  Why bother to denote them at all?
>One could simply use variables "true", "false", and "nil".  Why clutter up
>the syntax so pointlessly?  

#T and #F are reader syntax, and are very difficult to shadow or change
in most Lisps.  By contrast, true, false, and nil are usually normal variables
that can be rebound or lexically shadowed, at which point all hell can break
loose.  In addition, using reader syntax for such basic primitives as #T and
#F means the reader can insert the most efficient representation for them,
usually saving the overhead of a variable lookup.

> Lisp's syntax
>remains an anachronism that inhibits the acceptance of the language in 
>many contexts where its superior expressive power would be appropriate.

Lisp has survived very well over the years.  I doubt that Lisp will ever
become widely accepted, but for reasons much deeper than the syntax. I'm
not sure that this is a bug; personally I consider it a feature!


Mike.



-- 
Mike Clarkson					····@ists.ists.ca
Institute for Space and Terrestrial Science	uunet!attcan!ists!mike
York University, North York, Ontario,		FORTRAN - just say no. 
CANADA M3J 1P3					+1 (416) 736-5611
From: Stephen Knight
Subject: Re: Re: Code as data (replies to comments).
Date: 
Message-ID: <1350021@otter.hpl.hp.com>
A few quick replies to these interesting comments followed by a longer
exposition of my crazy ideas.

> It's a very important benefit of Lisp that there is no distinction
> between "built-in operators" and "user-defined functions". 

Of course.  But this does not imply prefix syntax (cf. Prolog for counter
example.)  The point I was trying to make was that Lisp has made unsatisfying
compromises, and I gave an arithmetic expression as an example.  Other
languages do not make this compromise and do not seem to suffer any penalty.

> Meanwhile, Lisp also has the
> advantage that nobody has to learn a bunch of arbitrary "precedence
> rules"; everything just works in the obvious way.

At the expense of inflexible and rather odd-looking expressions.  Again, I was
trying to establish the notion that a compromise has taken place.  I would 
certainly agree that the over-use of infix syntax is poor style.

>>    Outside the Lisp
>>    community, the syntax is widely considered to be a laughing matter.  
> 
> It's gratifying to see that this conversation is being kept on a
> rational, intellectual plane.

Maybe you feel the remark was tactless but that's life as I see it,
sorry.  I *promise* that I was not trying to create a slanging match.  I
was only trying to show by dramatic contrast that Lispers are very insular in
their views on the weaknesses of Lisp.  The fact that people scoff at
Lisp's syntax signifies that a real compromise of readability has taken
place.  I appeal to your honesty.  Haven't you noticed people scoffing
at Lisp purely because of its syntax?  I think that's a silly reason to
reject a most interesting language.  I want to accept it as a valid criticism
and propose a route to a better language.

> In just about all the cases you listed, there's nothing "very
> strange"; these things are simply different from what you are familiar
> with. 

Hmm.  I must have structured this a bit poorly.  The idea was to show that
Lisp's syntax is not compatible with the claim that code reflects data.
The example of '#( ... )' was intended to provoke the observation that
the choice of '#(' is completely arbitrary and therefore Lisp has a concrete
syntax, at least in this instance.  The succession of examples was intended
to hammer this point home.  These observations were meant to lead to a
relatively obvious point: Lisp has slowly but surely over the years acquired
a concrete syntax whose relationship with the underlying parse tree is
less and less direct.  Given this, I think a better compromise would be to
separate concrete and abstract syntax entirely, in the manner of modern
programming languages.

> One of the unusual design criteria of Lisp is that people often
> use it to create their own embedded languages, and this little
> decision makes it a bit more convenient for some of them.

There are other languages which do this, too.  They don't have prefix syntax.
Furthermore, the introduction of syntax is not an obvious problem in this
regard.

>>    Why are atoms quoted with a single quote at the start?
> 
> Everything is quoted with a single quote at the start, not only
> "atoms".  There's no need for a close quote; it would be entirely
> redundant.

At the risk of seeming ludicrous for discussing trivia in depth, there is
a quite serious point lurking here.

It should be clear that there is an implicit closer for an open quote --
namely the token boundary.  If the atom being quoted contains a token
boundary in its text it needs quoting in a more conventional manner with
string-like quotes.  As this is only an occasional requirement, Lisp has
a second quoting convention rather than only one.  (It has at least
three, of course.)

The absurdity is that a quote is chosen for the implicit close convention and
a vertical bar for the open-close convention.  The point here is that, for a
beginner, Lisp is dominated by bracket-pairs which must be matched exactly.
But almost immediately, the beginner is confronted with a quote (a close
quote, in fact) that doesn't need opening.  The impression of arbitrary rules
and poor choices is established almost immediately.

             -- OK, So what's the alternative(s) --

Rather than raising any more hackles, I'd like to propose my 'vision'
of Lisp with Pascal-like syntax, and an introduction route that hopefully
wouldn't have too many Lisp hackers hanging themselves in the morning.  
In case you haven't guessed, much of this vision derives from Prolog.  (I'm
not a great Prolog fan, by the way, so don't bother knocking it :-))

The two key ideas are to clean up the parse-tree representation of Lisp by
making the forms regular and to divorce concrete from abstract syntax.  A
corollary is that by having a clear intermediate form it becomes relatively
easy to begin defining multiple standard syntaxes -- one syntax being
s-expression like, the other Pascal-like, and perhaps others can be 
considered that also have merit.

Firstly, I suggest representing the parse-tree in s-expression format with
the head of each s-expression being one of a fixed set of atoms denoting
special forms.  Thus where one would have written
    (f E1 ... En)
the 'raw' syntax requires
    (APPLY f E1 ... En)
All and *only* literals would be quoted.  Thus where one would have written
    (f 'a 'b 7)
one now has
    (APPLY f (QUOTE a) (QUOTE b) (QUOTE 7))
[So far, this should be familiar to anyone who has bumped into Lispkit Lisp.]  
It is a matter of taste whether to go to the next stage and insist that
even variables are tagged with an appropriate atom:
    (APPLY (VAR f) (QUOTE a) (QUOTE b) (QUOTE 7))
As you can see, the correspondence between form and function has been made
excruciatingly obvious.  

Next, one makes EVAL work on this new format -- only one gives it a sensible
name such as 'raw-eval' so that you don't ordinarily fall over it.  The old
eval has to do a little bit more work to massage s-expressions into the
required form.
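[An evaluator over this tagged canonical form is only a few lines.  Here is a
sketch in Python rather than Lisp; raw-eval and the tag names come from the
proposal above, while the dictionary environment and the tuple encoding of
forms are my own assumptions, purely for illustration.]

```python
def raw_eval(form, env):
    """Evaluate a tagged form: (QUOTE x), (VAR v), or (APPLY f E1 ... En)."""
    tag = form[0]
    if tag == 'QUOTE':                        # (QUOTE x) -> the literal x
        return form[1]
    if tag == 'VAR':                          # (VAR v) -> look v up
        return env[form[1]]
    if tag == 'APPLY':                        # evaluate function, then args
        fn = raw_eval(form[1], env)
        args = [raw_eval(e, env) for e in form[2:]]
        return fn(*args)
    raise ValueError('unknown tag: %r' % (tag,))

# (APPLY (VAR f) (QUOTE a) (QUOTE b) (QUOTE 7)), with f bound to a
# function that simply collects its arguments:
env = {'f': lambda a, b, c: (a, b, c)}
form = ('APPLY', ('VAR', 'f'), ('QUOTE', 'a'), ('QUOTE', 'b'), ('QUOTE', 7))
print(raw_eval(form, env))   # ('a', 'b', 7)
```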

The vital stage is that Lisp is then defined in terms of this canonical
form.  The alternative syntaxes are going to be defined in terms of READ
functions (parsers) that generate s-expressions in this format.  This will
imply the need for two types of READ -- one that reads in a literal and
one that reads in a program.  Of course, the former is no more than a 
component of the latter.

The 'visionary' stage (which I freely admit is no more than a fantasy) is
then to restructure Lisp standards to accommodate the idea of multiple
syntaxes.  This means separating the standard into two levels, the lower
level defines Lisp in terms of the canonical form, the upper level(s) define
readers for the various syntaxes together with a technique for integrating
the reader into the Lisp interpreter.  (Actually, I imagine this is no more
that ensuring that the 'raw' interpreter takes a parser as a parameter +
the requirement for a notion of current-parser + parser associated with a 
package.  That's easy.)

This would then open the field for standardisation of new syntaxes.  All
the committee would have to do is ensure that it was compatible with the
lower-level of the standard, convince 1 million Lisp programmers that it's
totally brilliant, and a new look is born! :-)  Needless to say, the first
and original syntax will be there, in all its glorious technicolor.  So this
should satisfy Lispers who want to stick with the current concrete syntax.

Of course, I think that *all* languages should be defined this way, not 
just Lisp.  The recent difficulties that the Prolog standardisation group
had with respect to syntax could, I think, have been largely eliminated
by constructing the standard in this manner.  

                -- And what's the payoff? --

Well, what kind of look would I give Lisp?  As I have tried to show that
Lisp's syntax is outdated and wrong-headed, I suppose I ought to put my
own ideas up for demolition.  I won't say I didn't ask for it....

Concepts:
* I admire the philosophy of languages, such as Ada and Pop11, which have
  explicit opening and closing pairs for their major syntactic constructs.
  They also have plenty of padding words, such as 'then', 'else', 'do',
  which improve local readability.
* I also enjoy the use of infix syntax for operators such as assignment,
  arithmetic, boolean operators, and a few other conventional cases.  Although
  user-definable syntax is attractive from the viewpoint of orthogonality
  of syntax, my experience is that it needs strict limiting to be acceptable.
  So I would limit user-defined operators to be of a single precedence level
  and enforce bracketing where there is potential ambiguity.
* If the language is intended for functional programming then I prefer
  function application to be denoted by expression juxtaposition.  This 
  doesn't work so well in Lisp because functions can take multiple
  arguments.  So for this purpose, I think that the standard prefix form
  that is familiar from mathematics and languages such as FORTRAN and Pascal
  is appropriate.
* In imperative languages, it's a good idea to permit each expression
  context to contain a sequence of expressions, separated by the ubiquitous
  semicolon (which is optional at the end of a sequence), and to allow
  declarations.
* There are many occasions when monadic operators look best in postfix
  format.  I would borrow the postfix '.' of C, Pascal, Ada, Pop11, etc.,
  for this.
* I think it is a point of good syntax design for the formal parameter list
  to mirror the actual parameter list.  Thus I prefer DEFINE to DEFUN.
* The list syntax of Prolog is admirable because it elegantly integrates
  bracketing with list appending without the need for quoting (as in Lisp
  and Pop11) and *also* works in a functional context, unlike other Prologian
  ideas.

Examples:
When these points are taken together, you get a language which looks 
something like this.  Here is 'append' written in this style.

define append( x, y ) as
    if x = [] then
        y
    else
        [ x.head | append( x.tail, y ) ]
    endif
enddefine

(   If you wanted to be cute you could write,
        define append( x, y ) as [|x |y] enddefine
    but that's because append is built into the syntax.  It's just like
    writing
        (defun append( x y ) `( ,@x ,@y ))
    but who'd be so gross?
)

The next example is defining 'reverse'.  I'd do it like this.

define fold( op, seed, list ) as
    if list = [] then
        seed
    else
        fold( op, op( list.head, seed ), list.tail )
    endif
enddefine

define reverse( list ) as
    fold( cons, [], list )
enddefine

And if I suspected the compiler was too stupid to optimise 'fold' into 
an iterative format, I'd write the following, where -> is assignment from
left to right.

define fold( op, seed, list ) -> seed as
    for i in list do
        op( i, seed ) -> seed
    endfor
enddefine
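[The iterative fold and its use in reverse can be checked directly by
transcribing them into Python.  An illustrative sketch only: fold, cons, and
reverse here are the definitions above, not any library's, and Python lists
stand in for Lisp lists.]

```python
def fold(op, seed, lst):
    # the iterative version: 'for i in list do op(i, seed) -> seed endfor'
    for i in lst:
        seed = op(i, seed)
    return seed

def cons(head, tail):
    # a list-building cons for this sketch
    return [head] + tail

def reverse(lst):
    return fold(cons, [], lst)

print(reverse([1, 2, 3]))   # [3, 2, 1]
```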

I think that I would follow Pop11 in using a left-to-right assignment for 
two reasons.  Firstly, it suggests that the expression is evaluated before
the target (and I would insist that that was true).  Secondly, it means that
the formal parameter line can echo the actual call and still put the name
of the function being defined first.  Otherwise I'd have to have written
    define seed := fold( op, seed, list ) as ...
which is OK but less attractive to my eyes.  The for loop construct is picked
out of Pop11, too, of course.  Unlike Pop11, I assume in the example that 
the loop variable is automatically declared locally to the for loop -- as in
Ada and Algol68, for example.  In addition, the loop variable would be
protected from assignment, as in Algol68, which can be syntactically enforced.

So, these are my crazy ideas.  I think they give Lisp a new and attractive
look, borrow from modern languages the best, and leave the dross behind.
The language is still Lisp, of course, which is the beauty of the thing.

Ah well, time for the flames, I suppose ....

Steve Knight
HP Labs, Bristol
June 11th 1989
---------------------------------------------------------------------------
The views expressed above are not necessarily those of my employers or even
my friends, I am sad to say.  But I am working on it.
---------------------------------------------------------------------------
From: Dan Weinreb
Subject: Re: Code as data (replies to comments).
Date: 
Message-ID: <383@odi.ODI.COM>
In article <·······@otter.hpl.hp.com> ···@otter.hpl.hp.com (Stephen Knight) writes:

   > It's a very important benefit of Lisp that there is no distinction
   > between "built-in operators" and "user-defined functions". 

   Of course.  But this does not imply prefix syntax (cf. Prolog for counter
   example.)  

It implies uniform syntax, which conventional arithmetic notation does not
have.

	      The point I was trying to make was that Lisp has made unsatisfying
   compromises, and I gave an arithmetic expression as an example.  

Which I do not find to be a compromise worth mentioning.

								    Other
   languages do not make this compromise and do not seem to suffer any penalty.

Others suffer by being inextensible.  I have already explained this; I won't
go into it again.

   At the expense of inflexible 

Inflexible?  It is as flexible as can be.

				and rather odd-looking expressions.  Again, I was
   trying to establish the notion that a compromise has taken place.  

It's only odd-looking to you.  The only "compromise" is that many
people are not accustomed to it, as I have said over and over again.
Since we appear to be agreeing, I won't send any further postings on
the topic of Lisp syntax.

								      I would 
   certainly agree that the over-use of infix syntax is poor style.

				      Haven't you noticed people scoffing
   at Lisp purely because of its syntax?  

Adult, educated people?  Not in the last ten years, no.  But I have
seen people put off by it, as we both agree.

Yes, you can propose a whole new language that you have just invented,
but it will be very hard to gain a following for any new language.  If
you want to get people involved, the existing Lisp has many advantages
over a newly synthesized language: many people know it, textbooks
exist, implementations exist, there's an ANSI standard committee, and
so on.  New languages can be interesting research work, but from a
practical point of view, it's very, very hard to make them fly.  Even
if they are as logical and elegant as Esperanto.
From: Jeff Dalton
Subject: Re: Re: Code as data (replies to comments).
Date: 
Message-ID: <534@skye.ed.ac.uk>
In article <·······@otter.hpl.hp.com> ···@otter.hpl.hp.com (Stephen Knight) writes:
>A few quick replies to these interesting comments followed by a longer
>exposition of my crazy ideas.

Your message is a very long one, so I don't think I can respond to
every point in it.

It is true that some people dislike Lisp syntax, and find that a
reason to avoid Lisp.  It's true that some people prefer infix
notation and "if"-"endif" bracketing.  But what still needs to be
taken into account is that some people find the Lisp syntax better and
more readable than the alternatives.  They do not regard it as a bad
compromise or even as a good compromise, trading readability for
something else that's worth more.

That other people disagree and say it is a bad compromise doesn't
mean they're right that it's bad.  People disagree about all kinds
of things for all kinds of reasons.  Some things that suffer near
universal ridicule are later proclaimed by all to be True, Right,
and Good.

You suggest that Lisp might be better off if it supported more
than one syntax.  Well, maybe that's true.  But Lisp already lets
one put additional syntax on top of Lisp.

>> It's a very important benefit of Lisp that there is no distinction
>> between "built-in operators" and "user-defined functions". 
>
>Of course.  But this does not imply prefix syntax (cf. Prolog for counter
>example.)

Most languages do not allow the user to define new infix operators.
Prolog does, but has a more complex syntax as a result.  Sometimes it
works, so that operator declarations make the code more readable,
sometimes it doesn't.

Besides, as far as I know, there's no portable way for a Prolog
programmer to define new operators that are recognized by is/2.

>>>    Outside the Lisp
>>>    community, the syntax is widely considered to be a laughing matter.  
>> 
>> It's gratifying to see that this conversation is being kept on a
>> rational, intellectual plane.
>
>Maybe you feel the remark was tactless but that's life as I see it,
>sorry.  I *promise* that I was not trying to create a slanging match.  I
>was only trying to show by dramatic contrast that Lispers are very insular
>in their views on the weaknesses of Lisp.

No, we are quite aware that some people dislike Lisp's syntax.
But it does not follow from that that those people are right
and those who prefer Lisp's syntax are wrong.

>The fact that people scoff at Lisp's syntax signifies that a real
>compromise of readability has taken place.

No it doesn't.  People scoff at lots of things.  Sometimes they
have good reasons, other times they don't.  Sometimes they have good
reasons but still turn out to be wrong, or bad reasons but turn out
to be right.

>I appeal to your honesty.  Haven't you noticed people scoffing
>at Lisp purely because of its syntax?  I think that's a silly reason to
>reject a most interesting language.  I want to accept it as a valid criticism
>and propose a route to a better language.

So is it a "silly reason" or a "valid criticism"?

[various things omitted]

>>>    Why are atoms quoted with a single quote at the start?
>> 
>> Everything is quoted with a single quote at the start, not only
>> "atoms".  There's no need for a close quote; it would be entirely
>> redundant.
>
>At the risk of seeming ludicrous for discussing trivia in depth, there is
>a quite serious point lurking here.
>
>It should be clear that there is an implicit closer for an open quote --
>namely the token boundary.

That's the end of the token, not the end of the quote.  There's no
need to think of it as some kind of end of quote.

>If the atom being quoted contains a token boundary in its text it needs
>quoting in a more conventional manner with string-like quotes.  

No it doesn't.  There are two conventions for this sort of thing (not
just in Lisp): quotes and escape characters.  Besides, in Lisp the two
so-called quotes are for two different purposes.  That's why you may
have to write:

   '|this is an atom|

If this is explained as (1) the syntax of symbols and (2) literal
versus expression, there's no need for confusion.

>As this is only an occasional requirement, Lisp has a second quoting
>convention rather than only one.  (It has at least three, of course.)

If you call enough things "quoting conventions", you can have any
number of them I suppose.

>The absurdity is that a quote is chosen for the implicit close convention and
>a vertical bar for the open-close convention.  The point here is that, for a
>beginner, Lisp is dominated by bracket-pairs which must be matched exactly.
>But almost immediately, the beginner is confronted with a quote (a close
>quote, in fact) that doesn't need opening.  The impression of arbitrary rules
>and poor choices is established almost immediately.

What is the close quote that doesn't need opening?  Do you just mean
that the character "'" is a close quote?

Besides, beginners don't need to know about vertical bar at all.

>Rather than raising any more hackles, I'd like to propose my 'vision'
>of Lisp with Pascal-like syntax, and an introduction route that hopefully
>wouldn't have too many Lisp hackers hanging themselves in the morning.  

There's nothing wrong with your syntax, for those who like that
sort of thing.  Why don't you try implementing it and see if lots
of people like it.  I'd certainly be interested in trying it out.

>The next example is defining 'reverse'.  I'd do it like this.
>
>define fold( op, seed, list ) as
>    if list = [] then
>        seed
>    else
>        fold( op, op( list.head, seed ), list.tail )
>    endif
>enddefine
>
>define reverse( list ) as
>    fold( cons, [], list )
>enddefine

I guess I still find the following better:

     (define (fold op seed list)
       (if (null? list)
           seed
         (fold op
               (op (car list) seed)
               (cdr list))))

     (define (reverse list)
       (fold cons '() list))

-- Jeff
From: Juergen Wagner
Subject: Re: Re: Code as data (replies to comments).
Date: 
Message-ID: <9402@csli.Stanford.EDU>
In article <·······@otter.hpl.hp.com> ···@otter.hpl.hp.com (Stephen Knight) writes:
>> Meanwhile, Lisp also has the
>> advantage that nobody has to learn a bunch of arbitrary "precedence
>> rules"; everything just works in the obvious way.
>
>At the expense of inflexible and rather odd-looking expressions.  Again, I was
>trying to establish the notion that a compromise has taken place.  I would 
>certainly agree that the over-use of infix syntax is poor style.

Most of this discussion seems to be about syntactic issues. Syntax can be
changed, and if a particular user doesn't like the typical list LISP syntax,
he/she is free to write a different reader (what has been done: take CGOL as
an example). However, there you use different representations for the same
structures: why should I have to read foo(a,b) in one place, and [foo,a,b]
in another? Essentially, what I am dealing with is a list of three elements,
i.e. (foo a b). Pretty-printing (and `pretty-reading') may take care of
cases where the syntax is somewhat counterintuitive to users, but again, in
my view, that's something application- or user-specific.

I think every LISP user agrees that twelve or more closing parens at the
end of a function definition are terrible. However, this phenomenon is due
to the fact that LISP doesn't have a large number of syntactically different
styles to present objects. There is exactly one syntactic form, and that is
the LISP concept.

As new data types have been added, several LISP dialects (and also CommonLISP)
have attempted to unify them again into a uniform scheme.  The result was
the #-macro, which is nowadays used to represent structs, vectors, ...

A little further down in your response, you mention readability.  Readability
of LISP programs is comparable to that of conventional programming languages.
In the case of LISP, readability is directly dependent on programming style
and code formatting. Also, the question is "readability for which type of
reader?" Clearly, COBOL programs are readable :-) :-) ...

On the issue of quoting characters: in LISP, everything may be a quoting
character.  If I were to define '&' as a read macro which takes one argument
and wraps this (unevaluated) argument in a list, i.e.

	&foo => (foo)

this would be an arbitrary convention which serves the purpose it is used
for. Quoting characters such as string quotes "..." and symbol quotes
|...| are arbitrary. But what is not?
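[The arbitrariness is easy to demonstrate with a toy reader.  A sketch in
Python, purely illustrative: read_form is my own name, and real Lisp read
macros hook into the reader's character dispatch rather than pattern-matching
on strings.]

```python
def read_form(text):
    # toy reader: the hypothetical '&' macro wraps the next form in a
    # one-element list, so the text "&foo" reads as the structure (foo)
    if text.startswith('&'):
        return [read_form(text[1:])]
    return text   # anything else reads as a bare symbol

print(read_form('foo'))    # 'foo'
print(read_form('&foo'))   # ['foo'], i.e. (foo)
```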

I think we have to make a clear distinction between features introduced as
conventions, and those introduced as necessities. The question "why is there
no closing quote" is not meaningful, given the semantics of the 'quote'
readmacro in LISP. Do NOT view "'" as a necessity. It is merely a shorthand
for (QUOTE ...) for those who want it (you will notice that we have
open/close parens again). You may define <...> to quote literal objects,
but that's an individual matter.
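As a sketch of that last point (this follows the classic
read-delimited-list idiom; nothing here is standard syntax), <...> could
be made to quote a literal list:

```lisp
;; Read <a b c> as (QUOTE (A B C)), i.e. the literal list (A B C).
(set-macro-character #\<
  (lambda (stream char)
    (declare (ignore char))
    (list 'quote (read-delimited-list #\> stream t))))

;; Make > terminate the list, behaving like a closing paren.
(set-macro-character #\> (get-macro-character #\)))
```

An individual matter indeed: the choice of <...> is as arbitrary as the
choice of "'".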

>Rather than raising any more hackles, I'd like to propose my 'vision'
>of Lisp with Pascal-like syntax, and an introduction route that hopefully
>wouldn't have too many Lisp hackers hanging themselves in the morning.  
>In case you haven't guessed, much of this vision derives from Prolog.  (I'm
>not a great Prolog fan, by the way, so don't bother knocking it :-))

Have you heard of POPLOG? (no typo)

>Firstly, I suggest representing the parse-tree in s-expression format with
>the head of each s-expression being one of a fixed set of atoms denoting
>special forms.  Thus where one would have written
>    (f E1 ... En)
>the 'raw' syntax requires
>    (APPLY f E1 ... En)
>All and *only* literals would be quoted.  Thus where one would have written
>    (f 'a 'b 7)
>one now has
>    (APPLY f (QUOTE a) (QUOTE b) (QUOTE 7))
>[So far, this should be familiar to anyone who has bumped into Lispkit Lisp.]

This requires a read-time decision as to whether something is interpreted as
a function call or as a list. The user may not want to make this decision
because he/she would like to defer this until the respective expression is
actually used. In the case of macros, the user of those macros may have no
clue as to whether some or all of the macro's arguments are going to be
used as code or as data.

>It is a matter of taste, whether to go to the next stage and insist that
>even variables are tagged with an appropriate atom.
>    (APPLY (VAR f) (QUOTE a) (QUOTE b) (QUOTE 7))
>As you can see, the correspondence between form and function has been made
>excruciatingly obvious.  

There is no *need* to tag "variables" as such. This is the subtle difference
which distinguishes e.g. TurboProlog from *REAL* Prolog, and FORTRAN from
LISP. LISP objects know what type they are. QUOTE has a different function:
it is a language construct which controls evaluation, i.e. something which
cannot be inferred from the symbol itself.

>...
>The vital stage is that Lisp is then defined in terms of this canonical
>form.  The alternative syntaxes are going to be defined in terms of READ
>functions (parsers) that generate s-expressions in this format.  This will
>imply the need for two types of READ -- one that reads in a literal and
>one that reads in a program.  Of course, the former is no more than a 
>component of the latter.

How do you know when to use which reader? In the case of certain macros, this
may be impossible (or you may need a general theorem prover to find that out).

>* I also enjoy the use of infix syntax for operators such as assignment,
>  arithmetic, boolean operators, and a few other conventional cases.  Although
>  user-definable syntax is attractive from the viewpoint of orthogonality
>  of syntax, my experience is that it needs strict limiting to be acceptable.
>  So I would limit user-defined operators to be of a single precedence level
>  and enforce bracketing where there is potential ambiguity.

Why not? Any user may define new readmacros ('open-close' or just 'tag').

>* If the language is intended for functional programming then I prefer
>  function application to be denoted by expression juxtaposition.  This 
>  doesn't work so well in Lisp because functions can take multiple
>  arguments.  So for this purpose, I think that the standard prefix form
>  that is familiar from mathematics and languages such as FORTRAN and Pascal
>  is appropriate.

It is one possible alternative. How about postfix notation? How about a
different model which uses stacks (a la Forth or PostScript)?

>* In imperative languages, it's a good idea to permit each expression
>  context to contain a sequence of expression (separated by the ubiquitous
>  semicolon (which is optional at the end of sequences)) and to allow
>  declarations.

What are declarations needed for? Why not use PROGN? LISP has the advantage
that there is only one paradigm: that of functional application. This 
paradigm is used to implement functions in the conventional sense, macros,
and special forms. I call this a uniform model. In my opinion, special
constructs are not needed for sequences, conditional evaluation, etc. if
all that can be embedded into the functional calculus of LISP.
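As a minimal sketch of that embedding: PROGN is an ordinary form which
evaluates its subforms in order and returns the value of the last one, so
no separate statement/expression distinction (or semicolon) is needed:

```lisp
;; A "sequence" expressed inside the functional framework:
;; side effects first, then the value of the last form.
(defun greet-and-add (x y)
  (progn
    (format t "adding ~a and ~a~%" x y)
    (+ x y)))
```

Here (greet-and-add 1 2) prints its message and returns 3; the sequence is
just another expression.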

>* There are many occasions when monadic operators look best in postfix
>  format.  I would borrow the postfix '.' of (C, Pascal, Ada, Pop11,...)
>  for this.

Why not?

>* I think it is a point of good syntax design for the formal parameter list
>  to mirror the actual parameter list.  Thus I prefer DEFINE to DEFUN.

Again: a matter of taste. (defmacro define (&rest args) `(defun ,@args))
Or how about Scheme?

>* The list syntax of Prolog is admirable because it elegantly integrates
>  bracketing with list appending without the need for quoting (as in Lisp
>  and Pop11) and *also* works in a functional context, unlike other Prologian
>  ideas.

The list syntax in Prolog can also be counterintuitive if the [...] notation
already has a different meaning in the domain of an application. In such
cases, one may prefer <...> or (...) or {...} or ....

There are more points I should mention but... well...

-- 
Juergen Wagner		   			·······@csli.stanford.edu
						 ······@arisia.xerox.com
From: Stephen Knight
Subject: Re: Re: Code as data (replies to comments).
Date: 
Message-ID: <1350023@otter.hpl.hp.com>
> Yes, you can propose a whole new language that you have just invented,
> but it will be very hard to gain a following for any new language.

Accepted, with the reservation that it's actually only a fraction of a 
new language.

> Even if they are as logical and elegant as ... Esperanto.

And I *hoped* I was going to get a sweetener :-)

Steve Knight,
HP Labs, Bristol, UK
From: Dan Weinreb
Subject: Re: Code as data (Syntax, macros, run-time compilation)
Date: 
Message-ID: <378@odi.ODI.COM>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:

   However, this does not require something like Lisp syntax. E.g.
   Pop-11 (following Pop-2) allows macros. The input stream is a
   dynamic list of text items (words, strings, numbers etc) and a macro
   is just the name of a procedure that reads in a portion of that stream,
   changes it, and then puts back the result onto the front of the list.
   If it contains no macros, that is what will be compiled in place
   of the original. Lisp syntax makes the reading and rearranging easier.
   Pop syntax gives more freedom for syntactic extensions - e.g.
   expressions and statements don't have to begin and end with parentheses.

What Pop-11 macros are given is like the output of the "lexer" part of
a compiler, i.e. the input to the "parser" part, whereas what Lisp
macros are given is like the output of the "parser" part.  Lisp macros
deal in the equivalent of "parse trees".  If you want to write a macro
that extends the language in an interesting way, such as adding a new
iteration construct, adding a special form to define "rules" or
something, etc., it is far easier if you are given the parser's
output.  To do the equivalent in Pop-11, you'd have to essentially
write your own parser (possibly simpler than that of a full compiler,
but it would have to know a lot about the language).
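As an illustration of the kind of parse-tree transformation this refers to,
here is a stock WHILE iteration construct as a Lisp macro (a textbook
example, not anything from Pop-11 or a particular product):

```lisp
;; WHILE as a macro: rewrite the "parse tree" (a list) into a LOOP
;; form that exits when TEST becomes false.
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))
```

The macro never sees characters, only list structure; it rearranges one
tree into another, and the compiler then compiles the expansion.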

Lisp macros derive their tremendous power and utility from the way
they transform one "parse tree" into another.  To be able to write a
Lisp macro, the programmer must be familiar with the format of the
parse tree.  Lisp's syntax makes this very easy, because the
correspondence between what you see on the page and how the parse tree
is organized is very simple.  Yes, you could still have Lisp macros if
a Pascal-like syntax were used, but it would make macro-writing more
difficult.  This is a tradeoff that should be considered carefully
by people who are considering alternative Lisp syntaxes.

You mentioned Pop-11's ability to let you use some character other
than parentheses for lists, and things like that.  Lisp has this, too.
It's a distinct facility, known as "reader macros", that let you do
transformations at the lexical level rather than the parse-tree level.
Doing this kind of thing has its drawbacks.  One is that the text
editors that currently exist need to be taught about the reader
macros, so that their language-understanding commands will work.
Another drawback is that reader macros obscure the correspondence
between what you see on the page and how the parse tree is organized,
so macro-writers need to keep in mind what the reader macros are
doing; the more you have to keep in your mind, the harder it is to
program.  For these and various other reasons, good Lisp programmers
only use reader macros sparingly, when there's a lot of benefit to
be had.  But they're there if you need them, and you can use them
as much as you want.  (The Lisp single-quote is typically implemented
as a very simple reader macro.)

By the way, the idea of alternative Lisp syntaxes is not new.  There
was once an attempt to define such a thing, at MIT, way before my
time, called "Lisp 2".  It was so long ago that I have never been able
to find any real documentation on it.  I believe the effort to define
it was abandoned for some reason.  One of the subsequent attempts at
MIT was defined and implemented successfully.  It was called CGOL,
written by Vaughan Pratt, I think sometime around 1976.  There is an
MIT AI Lab memo that describes it, and the source code might very well
be available from the AI Lab, for all I know.  CGOL apparently never
caught on with any user community.  Obviously, that fact by itself
proves nothing about its worthiness, value, etc.  I just wanted to
make the point that the idea has been explored in the past, at least
two times, and therefore probably more times than that.  Just thought
you might be interested.
From: Albert Boulanger
Subject: Re: Code as data (Syntax, macros, run-time compilation)
Date: 
Message-ID: <40913@bbn.COM>
In article <···@odi.ODI.COM> Dan Weinreb writes:

  One of the subsequent attempts at MIT was defined and implemented
  successfully.  It was called CGOL, written by Vaughan Pratt, I think
  sometime around 1976.  There is an MIT AI Lab memo that describes it,
  and the source code might very well be available from the AI Lab, for
  all I know.  CGOL apparently never caught on with any user community.
 

We still use this at BBN. In fact the infix reader for Lisp on the
Symbolics uses a reimplementation of CGOL. (This can be extended
somewhat.) We use the Symbolics reimplementation for reading infix
expressions for a rule structure-editor based on presentations. I
found writing CGOL "productions" much more natural than dealing
with YACC. I think a key element in CGOL for complex parsing is its
notion of sublanguages (which the Symbolics implementation omits).

Albert Boulanger
BBN Systems and Technologies.
From: Eric A. Raymond
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <1168@ptolemy.arc.nasa.gov>
In article <·····@well.UUCP> ·····@well.UUCP (John Nagle) writes:
>
>On the other hand, the fact that data and programs have the same 
>representation in LISP really doesn't seem to be used all that much
>any more.  It was felt to be terribly important at one time, but today,
>it just doesn't seem to be a big issue.

Or perhaps it's just taken for granted nowadays because it's been
encapsulated (sort of the way GOTO's have disappeared).

You use macros, don't you?

-- 
Eric A. Raymond  (·······@ptolemy.arc.nasa.gov)
G7 C7 G7 G#7 G7 G+13 C7 GM7 Am7 Bm7 Bd7 Am7 C7 Do13 G7 C7 G7 D+13: Elmore James
From: Mark Rosenstein
Subject: Lisp data and program having same representation still important!
Date: 
Message-ID: <230@hi3.ACA.MCC.COM>
   In article <·····@well.UUCP> ·····@well.UUCP (John Nagle) writes:

   On the other hand, the fact that data and programs have the same 
   representation in LISP really doesn't seem to be used all that much
   any more.  It was felt to be terribly important at one time, but today,
   it just doesn't seem to be a big issue.
 
Ummmm. The only datapoint I have is my own work, and work in the 5
or 6 projects here at work that I know about. I would say all of them
use the fact that data and program have the same representation. I
would be so bold as to exaggerate that the only interesting lisp programs
are ones that generate and manipulate structure as programs. [Then compile
'em and use 'em, of course]. Of course elsewhere it may be different.
I find this especially true in object oriented systems like CLOS and 
flavors, where a lot of what happens is that the program is happily
creating new classes and methods for those classes and then using
them.
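A minimal sketch of the pattern described, generating structure and then
compiling and using it (make-adder is a made-up example, not from any of
the projects mentioned):

```lisp
;; Build a lambda expression as ordinary list structure,
;; then compile that list into a real function object.
(defun make-adder (n)
  (compile nil `(lambda (x) (+ x ,n))))

;; (funcall (make-adder 3) 4) returns 7.
```

The program literally constructs program text as data, then turns it into
compiled code at run time.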

Mark.
From: John Nagle
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <11898@well.UUCP>
In article <····@syma.sussex.ac.uk> ······@syma.sussex.ac.uk (Aaron Sloman) writes:
>There is some evidence, albeit based only on a number of cases
>rather than systematic research, that it is much easier to convert
>people to Lisp-like ways of thinking if you give them a Lisp-like
>language (with all the benefits of incremental compilation,
>integrated editor, garbage collector, pattern matcher, lists,
>records, arrays, tuples, etc etc) but not a lisp-like syntax.

      I've found LISP as a language quite useful over the years, but
the "integrated environments" are something of a pain.  It's not at
all clear that the development environment and the object being developed
should be as intertwined as they are on, say, a Symbolics or the defunct
LMI machines.  What seems to happen in these systems is that the developed
object is a world, that is, a copy of the entire environment, rather than
a source text.  The state of the system becomes part of the thing being
developed.  This makes maintenance difficult.  In particular, merging the
work of several programmers is rather painful.  

      The Symbolics approach seems to be oriented toward the classic
hacker-type approach to programming.  The AI community sometimes calls
this "exploratory programming", although that term is somewhat in 
disrepute today.  It does not seem to be oriented toward the development
of programs that will be used by others.

					John Nagle
From: Jeff Dalton
Subject: Re: Unix Lisp Environments (why the slow evolution)
Date: 
Message-ID: <497@skye.ed.ac.uk>
In article <·····@well.UUCP> ·····@well.UUCP (John Nagle) writes:
>       I've found LISP as a language quite useful over the years, but
> the "integrated environments" are something of a pain.  It's not at
> all clear that the development environment and the object being developed
> should be as intertwined as they are on, say, a Symbolics ...

Interestingly enough, some people have exactly the opposite view.
They think that programmers don't really like Lisp as a language --
what they really like is the environment.  I have even heard this
said to students by the person teaching them Lisp.

There are also some people who would like to give Prolog an
"environment like Lisp".  This is a less extreme view, but it
also presupposes that Lisp and its environment can easily be
separated.

> What seems to happen in these systems is that the developed
> object is a world, that is, a copy of the entire environment, rather than
> a source text.  The state of the system becomes part of the thing being
> developed.

The Symbolics and LMI environments do let you write a set of files.
And, indeed, that's where function definitions are kept.  The Xerox
approach is more intertwined, because you don't operate directly on
the files.  You build a world, and then the system tries to write
all of the files for you.

However, whenever you do interactive development it seems to be
possible to have some things work only because files happened to be
loaded in a certain order, and then some computations were done, then
some functions redefined, etc., so that just reloading the files won't
give you a working system.

But this hardly shows that noninteractive development is better.
And it doesn't mean that your intent can't be to develop a source
text.

> This makes maintenance difficult.  In particular, merging the
> work of several programmers is rather painful.  

I would say that the Lisp environment can be considered something
like dbxtool for C.  Why is it a bad thing if I can change one function
and continue rather than having to recompile, relink, and start over?
That is, I don't think I have to use a Lisp machine to build a world.
It can be used just to write a program.

And, since I don't see how dbxtool makes it harder for programmers
to work together, I don't see why the LispM environment has to do
that either.

>       The Symbolics approach seems to be oriented toward the classic
> hacker-type approach to programming.  The AI community sometimes calls
> this "exploratory programming", although that term is somewhat in 
> disrepute today.  It does not seem to be oriented toward the development
> of programs that will be used by others.

The idea was always in dispute (and so, I guess, in disrepute to
those who didn't like it).  There have long been people (Dijkstra
comes to mind) who think it best to think, write things out on
paper, and even prove them correct, before typing them in.  But
is this always the best way to proceed?

Moreover, if I'm going to write something down, I'd just as soon type
it in.  And if it's typed in, I'd like to test it out.  But, in any
case, just because something is good for exploratory programming
doesn't mean it's no good for anything else.

-- Jeff