From: ············@gmail.com
Subject: modifying array access syntax
Date: 
Message-ID: <1125103005.023291.53740@g43g2000cwa.googlegroups.com>
I do a lot of multi-dimensional array number crunching.  Setting aside
possible performance drawbacks for a moment, Common Lisp appears to
have things I've desired for a long time, in particular the macro
capability, and multi-methods secondarily.  Lisp is advertised as "the
extensible language" and I hope so, because the array access notation
is IMO unacceptable in the native syntax.

C is awkward: a[i][j][k]

fortran is really the most natural of the usual suspects: a(i,j,k)

CL is horrendous: (aref a i j k)

What I would like to do is put CL extensibility to the test by defining
a new array access syntax.  I'd like to start very simple, although I
have a lot of ideas about how to generalize this into something really
nice.  I'd like:

a_i,j,k => (aref a i j k)

Does this call for defmacro or set-macro-character, and why?

Thanks,
Bob

From: Ron Garret
Subject: Re: modifying array access syntax
Date: 
Message-ID: <rNOSPAMon-C1B053.17431226082005@news.gha.chartermi.net>
In article <·······················@g43g2000cwa.googlegroups.com>,
 ············@gmail.com wrote:

> I do a lot of multi-dimensional array number crunching.  Setting aside
> possible performance drawbacks for a moment, Common Lisp appears to
> have things I've desired for a long time, in particular the macro
> capability, and multi-methods secondarily.  Lisp is advertised as "the
> extensible language" and I hope so, because the array access notation
> is IMO unacceptable in the native syntax.
> 
> C is awkward: a[i][j][k]
> 
> fortran is really the most natural of the usual suspects: a(i,j,k)
> 
> CL is horrendous: (aref a i j k)
> 
> What I would like to do is put CL extensibility to the test by defining
> a new array access syntax.  I'd like to start very simple, although I
> have a lot of ideas about how to generalize this into something really
> nice.  I'd like:
> 
> a_i,j,k => (aref a i j k)
> 
> Does this call for defmacro or set-macro-character, and why?

Set-macro-character, because at the very least you need to redefine how 
the reader handles commas, otherwise the above syntax would cause the 
reader to signal an error.  You might also want to redefine how the 
reader handles underscores, although that is not necessary to do what 
you want (and possibly/probably not desirable).
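[A sketch of the point Ron is making, added for concreteness; the function name COMMA-AS-SEPARATOR is illustrative and this only makes the commas readable, it does not build the full a_i,j,k => (aref a i j k) transformation:]

```lisp
;; In the standard readtable a comma is legal only inside a backquote,
;; so reading the token after a_i signals an error:
(handler-case
    (with-input-from-string (s "a_i,j,k")
      (list (read s) (read s)))
  (error () :comma-error))
;; => :COMMA-ERROR

;; Installing our own comma handler makes commas act as separators;
;; note the underscore needs no special handling at all:
(defun comma-as-separator (stream char)
  (declare (ignore char))
  (read stream t nil t))  ; just return the next form

(let ((*readtable* (copy-readtable)))
  (set-macro-character #\, #'comma-as-separator)
  (with-input-from-string (s "a_i,j,k")
    (list (read s) (read s) (read s))))
;; => (A_I J K)
```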

rg
From: Christopher C. Stacy
Subject: Re: modifying array access syntax
Date: 
Message-ID: <uacj4ktdt.fsf@news.dtpq.com>
············@gmail.com writes:

> a_i,j,k => (aref a i j k)

Yuck.

Hey, but if you like it...
From: Nathan Baum
Subject: Re: modifying array access syntax
Date: 
Message-ID: <deokv6$kdc$1@news6.svr.pol.co.uk>
Christopher C. Stacy wrote:
> ············@gmail.com writes:
> 
> 
>>a_i,j,k => (aref a i j k)
> 
> 
> Yuck.
> 
> Hey, but if you like it...

Yes, I think it's a bit hideous as well. ;-)

I made:

   (defmacro with-functional-array-access (array &body forms)
     `(macrolet ((,array (&rest args)
                  (list* 'aref ',array args)))
       ,@forms))

   (let ((a "foobar"))
     (with-functional-array-access a
       (print (a 1))))

I used a macrolet rather than the simpler flet so that setf would work. 
You need to use a non-literal sequence for that. SBCL behaves rather 
strangely if you setf elements of a literal string within a lexical 
boundary.

   (let ((a "foobar"))
     (with-functional-array-access a
       (print a)
       (print (a 1)))
     (with-functional-array-access a
       (setf (a 1) #\a))
     (with-functional-array-access a
       (print a)
       (print (a 1))))

This produces

   "foobar"
   #\o
   "faobar"
   #\o

Note how even though the character has changed according to (print a), 
it hasn't changed according to (print (a 1)). This doesn't happen when 
changing string elements at the toplevel.
From: Juho Snellman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <slrndgvn5q.s3a.jsnell@sbz-31.cs.Helsinki.FI>
<···········@btinternet.com> wrote:
> I used a macrolet rather than the simpler flet so that setf would work. 
> You need to use a non-literal sequence for that. SBCL behaves rather 
> strangely if you setf elements of a literal string within a lexical 
> boundary.

Since modifying literals is forbidden, the array reads are
constant-folded away. SBCL 0.9.4 (released yesterday) will give
compile-time warnings for certain statically detectable attempts to
modify literals. For example on your code:

; caught WARNING:
;   Destructive function SB-KERNEL:%ASET called on constant data.
;   See also:
;     The ANSI Standard, Special Operator QUOTE
;     The ANSI Standard, Section 3.2.2.3
; 
; compilation unit finished
;   caught 1 WARNING condition
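
[A sketch of the standard workaround, for the record; the variable name *A* is illustrative. Since modifying a literal is undefined behavior, take a mutable copy first:]

```lisp
;; COPY-SEQ yields a fresh, mutable string, so there is no constant
;; data for the compiler to fold away:
(defparameter *a* (copy-seq "foobar"))
(setf (aref *a* 1) #\a)
;; Reads and writes now agree:
*a*           ; => "faobar"
(aref *a* 1)  ; => #\a
```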

-- 
Juho Snellman
"Premature profiling is the root of all evil."
From: Nathan Baum
Subject: Re: modifying array access syntax
Date: 
Message-ID: <deol2t$kdc$2@news6.svr.pol.co.uk>
Nathan Baum wrote:
> Note how even though the character has changed according to (print a), 
> it hasn't changed according to (print (a 1)). This doesn't happen when 
> changing string elements at the toplevel.

And I should also make it clear that it doesn't happen when changing 
elements of strings whose elements you're officially allowed to 
change.
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125161480.874011.221540@o13g2000cwo.googlegroups.com>
Christopher C. Stacy wrote:
> ············@gmail.com writes:
>
> > a_i,j,k => (aref a i j k)
>
> Yuck.
>
> Hey, but if you like it...

It's not a matter of me liking it.  It's a matter of it being as close
to standard notation in the field as possible within the limits of the
ASCII character set, which doesn't have subscripts, but '_' is as close
to a universal substitute as can be found.

Thanks,
Bob
From: Harald Hanche-Olsen
Subject: Re: modifying array access syntax
Date: 
Message-ID: <pco64trcn86.fsf@shuttle.math.ntnu.no>
+ ············@gmail.com:

| Christopher C. Stacy wrote:
| > ············@gmail.com writes:
| >
| > > a_i,j,k => (aref a i j k)
| >
| > Yuck.
| >
| > Hey, but if you like it...
| 
| It's not a matter of me liking it.  It's a matter of it being as close
| to standard notation in the field as possible within the limits of the
| ASCII character set, which doesn't have subscripts, but '_' is as close
| to a universal substitute as can be found.

But it clashes with Lisp syntax conventions in a big way, and that is
a MUCH bigger problem than merely looking somewhat different from what
other programming languages do.  There is not much point in making
language X look like language Y on the surface, if they're very
different underneath.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Pascal Bourguignon
Subject: Re: modifying array access syntax
Date: 
Message-ID: <87irxsfbq2.fsf@thalassa.informatimago.com>
············@gmail.com writes:

> I do a lot of multi-dimensional array number crunching.  Setting aside
> possible performance drawbacks for a moment, Common Lisp appears to
> have things I've desired for a long time, in particular the macro
> capability, and multi-methods secondarily.  Lisp is advertised as "the
> extensible language" and I hope so, because the array access notation
> is IMO unacceptable in the native syntax.
>
> C is awkward: a[i][j][k]
>
> fortran is really the most natural of the usual suspects: a(i,j,k)
>
> CL is horrendous: (aref a i j k)

If you do a lot of multi-dimensional array number crunching, CL is marvellous:
   [a i j k]

> What I would like to do is put CL extensibility to the test by defining
> a new array access syntax.  I'd like to start very simple, although I
> have a lot of ideas about how to generalize this into something really
> nice.  I'd like:
>
> a_i,j,k => (aref a i j k)
>
> Does this call for defmacro or set-macro-character, and why?

Yes, set-macro-character. Because.


There's a lot of material on the subject in news:comp.lang.lisp. As
always, use google!

http://groups.google.com/groups?lr=&num=10&q=%22set-macro-character%22+group%3Acomp.lang.lisp+aref&qt_s=Buscar

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
You never feed me.
Perhaps I'll sleep on your face.
That will sure show you.
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125161341.098811.321720@g43g2000cwa.googlegroups.com>
Pascal Bourguignon wrote:
> ············@gmail.com writes:
>
> > I do a lot of multi-dimensional array number crunching.  Setting aside
> > possible performance drawbacks for a moment, Common Lisp appears to
> > have things I've desired for a long time, in particular the macro
> > capability, and multi-methods secondarily.  Lisp is advertised as "the
> > extensible language" and I hope so, because the array access notation
> > is IMO unacceptable in the native syntax.
> >
> > C is awkward: a[i][j][k]
> >
> > fortran is really the most natural of the usual suspects: a(i,j,k)
> >
> > CL is horrendous: (aref a i j k)
>
> If you do a lot of multi-dimensional array number crunching, CL is marvellous:
>    [a i j k]

Well that's certainly closer to a workable start.  I wonder why this
isn't in Steele's CL, The Language, 2nd ed.  The only mention I can
find in the index is that this was rejected as a vector syntax;
Chapter 17 on Arrays doesn't seem to mention it at all.

A small test program fails:

(defvar a)
(setq a (make-array '(2 2) :initial-element 1.4))
(format t "~D,~D: ~F~%" 0 0 (aref a 0 0))
(format t "~D,~D: ~F~%" 0 0 [a 0 0])

What am I doing wrong, or are you suggesting this as an extension that
doesn't yet exist?

Thanks,
Bob
From: Pascal Costanza
Subject: Re: modifying array access syntax
Date: 
Message-ID: <3nbkujFopnbU1@individual.net>
············@gmail.com wrote:

> A small test program fails:
> 
> (defvar a)
> (setq a (make-array '(2 2) :initial-element 1.4))
> (format t "~D,~D: ~F~%" 0 0 (aref a 0 0))
> (format t "~D,~D: ~F~%" 0 0 [a 0 0])
> 
> What am I doing wrong, or are you suggesting this as an extension that
> doesn't yet exist?

It doesn't exist in Common Lisp, but you can implement it yourself.


Pascal

-- 
OOPSLA'05 tutorial on generic functions & the CLOS Metaobject Protocol
++++ see http://p-cos.net/oopsla05-tutorial.html for more details ++++
From: Harald Hanche-Olsen
Subject: Re: modifying array access syntax
Date: 
Message-ID: <pcoacj3cnhb.fsf@shuttle.math.ntnu.no>
+ ············@gmail.com:

| Pascal Bourguignon wrote:
| > If you do a lot of multi-dimensional array number crunching, CL is marvellous:
| >    [a i j k]
| 
| Well that's certainly closer to a workable start.  I wonder why this
| isn't in Steele's CL, The Language, 2nd ed.

Because it isn't in the standard.

| A small test program fails:
| (format t "~D,~D: ~F~%" 0 0 [a 0 0])
| 
| What am I doing wrong, or are you suggesting this as an extension that
| doesn't yet exist?

It doesn't exist because there is no consensus that it should exist.
But it is easily added:

(defun aref-reader (stream char)
  (declare (ignore char))
  (cons 'aref (read-delimited-list #\] stream t)))

(set-macro-character #\[ #'aref-reader)

(set-macro-character #\] (get-macro-character #\)))

In CL, you don't have to wait for some committee to add the syntax you
want.  You just go ahead and add it yourself.  But that said, it is
much easier to add some kinds of syntax than some others: the above
example fits in nicely with the lisp syntax, whereas a_i,j,k does
not.  You would have to redefine a large portion of the Lisp reader to
accommodate it.
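
[A quick usage sketch of the bracket syntax, with Harald's two
SET-MACRO-CHARACTER calls repeated so the snippet stands alone; the
variable *M* is illustrative. Because [...] reads as an ordinary AREF
form, SETF works on it with no extra effort:]

```lisp
(defun aref-reader (stream char)
  (declare (ignore char))
  (cons 'aref (read-delimited-list #\] stream t)))
(set-macro-character #\[ #'aref-reader)
(set-macro-character #\] (get-macro-character #\)))

(defvar *m* (make-array '(2 2) :initial-element 0))
(setf [*m* 0 1] 42)   ; reads as (setf (aref *m* 0 1) 42)
[*m* 0 1]             ; => 42
```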

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: http://public.xdi.org/=pf
Subject: Re: modifying array access syntax
Date: 
Message-ID: <m2y86mpsnd.fsf@mycroft.actrix.gen.nz>
On 27 Aug 2005 19:13:04 +0200, Harald Hanche-Olsen wrote:

> It doesn't exist because there is no consensus that it should exist.
> But it is easily added:

> (defun aref-reader (stream char)
>   (declare (ignore char))
>   (cons 'aref (read-delimited-list #\] stream t)))

> (set-macro-character #\[ #'aref-reader)

> (set-macro-character #\] (get-macro-character #\)))

Use (set-syntax-from-char #\] #\)) instead of that last.  It copies
the read-macro if there is one (i.e., one that just warns you that
you've typed too many brackets), but it does other stuff too.

-- 
Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats.
                                                          -- Howard Aiken
(setq reply-to
  (concatenate 'string "Paul Foley " "<mycroft" '(··@) "actrix.gen.nz>"))
From: Harald Hanche-Olsen
Subject: Re: modifying array access syntax
Date: 
Message-ID: <pco4q9a8p1t.fsf@shuttle.math.ntnu.no>
+ Paul Foley <···@below.invalid> (http://public.xdi.org/=pf):

| On 27 Aug 2005 19:13:04 +0200, Harald Hanche-Olsen wrote:
| 
| > (defun aref-reader (stream char)
| >   (declare (ignore char))
| >   (cons 'aref (read-delimited-list #\] stream t)))
| 
| > (set-macro-character #\[ #'aref-reader)
| 
| > (set-macro-character #\] (get-macro-character #\)))
| 
| Use (set-syntax-from-char #\] #\)) instead of that last.

I suppose that is good advice.

[In a whining voice:] But I had copied that bit from an example
in the description of read-delimited-list in the CLHS.

| It copies the read-macro if there is one (i.e., one that just warns
| you that you've typed too many brackets), but it does other stuff
| too.

As far as I can figure out, the only thing extra it does is copy the
syntax type, which in this case will be that of either a terminating
macro character or a nonterminating one.  In the code above, that is
set to a terminating macro character by the (omission of the) first
optional argument to set-macro-character.

But clearly, using set-syntax-from-char is more concise.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <ull2mexti.fsf@nhplace.com>
············@gmail.com writes:

> Pascal Bourguignon wrote:
> > ············@gmail.com writes:
> >
> > > I do a lot of multi-dimensional array number crunching.  Setting aside
> > > possible performance drawbacks for a moment, Common Lisp appears to
> > > have things I've desired for a long time, in particular the macro
> > > capability, and multi-methods secondarily.  Lisp is advertised as "the
> > > extensible language" and I hope so, because the array access notation
> > > is IMO unacceptable in the native syntax.
> > >
> > > C is awkward: a[i][j][k]
> > >
> > > fortran is really the most natural of the usual suspects: a(i,j,k)
> > >
> > > CL is horrendous: (aref a i j k)

Only horrendous if you don't appreciate what function this notation
is serving.

Even if it doesn't strike you as personally The Thing, keep in mind
that the design of the CL notation is intended to allow easy manipulation 
of bounded subforms.  See the first question I addressed in my Slashdot
interview for a worked explanation:

 http://slashdot.org/article.pl?sid=01/11/03/1726251&mode=thread

Once you abandon the constraint that the leading character dictates the
upcoming parse, you might as well change a lot of things about the syntax
because you've rendered useless the service that the design constraint is
offering you.

Note that I'm NOT saying you can't usefully do this.  You can.  And
perhaps should.  I'm just saying in so doing you are either saying
that you have considered and de-valued the reason it is as it is, or
else you have overlooked value in your haste to decide Goodness or
Badness as a kind of Universal Truth, independent of context.  

The designers of Lisp were trying to accomplish something specific in
creating this notation, and before one simply does the kind of
superficial comparison above (which seems syntactically to aspire to
the trappings of a researched comparison of several languages, yet
cites neither positives nor negatives for each notation, and which
merely alleges an implicit, canonical, single-minded,
one-size-fits-all Truth), it's often well to stop and think ... just
on the off chance one has the opportunity to learn something about a
culture that may initially seem foreign before blindly terraforming it
as if it bore a big sign saying "BULLDOZE AWAY - NO INTELLIGENT LIFE HERE".

There are layered notations people have in the past usefully applied to
Lisp, without injury to the underlying system, and without eliminating
in any way the ability to plug and play with that system.  IMO, you're
better off doing such a pre-processor (which, incidentally, you CAN
implement entirely and in Lisp and which you CAN make automatically
activate in normal Lisp files) than piecemeal changing 1 or 2 aspects
of Lisp syntax, invalidating the good it can offer, and then slogging
daily against the lack of coherence in what results.  Something like
CGOL is at least coherently principled rather than an ad hoc patch; viz.,

 http://groups.google.com/group/comp.lang.lisp/msg/3b0e2f9228b51a1f

But frankly, I think that with a few minutes thought, one can find that
the Lisp notation is extremely elegant, and that the notational 
symmetry between
 (funcall function i j k)
and
 (aref    array    i j k)
is also important, since in the end, you don't know that function isn't
doing:
 (lambda (i j k) (aref my-secret-array i j k))
or
 (lambda (i j k) (gethash (list i j k) my-secret-hash-table))
or
 (lambda (i j k) (row-major-aref my-secret-1d-array (+ (* dim3 dim2 i) (* dim3 j) k)))
That is, at some level, the difference between an array and a function is
nothing more than implementation, and perhaps something that shouldn't be
exposed.  If anything, the sad part of CL is that it doesn't carry on with what
MacLisp allowed, which was allowing
 (funcall array i j k)
so that arrays could masquerade as functions.  And, as such
all you're really complaining about isn't that the array notation
should be x(i,j,k) not (aref x i j k), but that the function call
notation shouldn't be (funcall x i j k).  And that's either a complaint
about Lisp1/Lisp2, which we've heard before, or else it's a complaint
about paren-notation in general, which we've heard before.

I don't think this discussion is about arrays, though.
If it is, it won't stop there.  It will stop at CGOL.
Hence my cross-reference.
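
[The interchangeability Kent describes can be sketched with a small
closure; this is my illustration, not MacLisp's actual mechanism, and
the name ARRAY-AS-FUNCTION is made up:]

```lisp
;; Wrap an array so callers FUNCALL it exactly like a function of its
;; indices; the caller cannot tell array from function.
(defun array-as-function (array)
  (lambda (&rest indices) (apply #'aref array indices)))

(let* ((a (make-array '(2 2 2) :initial-element 7))
       (f (array-as-function a)))
  (funcall f 1 0 1))   ; => 7, same call shape as any 3-argument function
```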

Maclisp, incidentally, also had the ability to define arrays so that
they worked in function space, not requiring the clunky ARRAYCALL
notation [the verbose equivalent of AREF+THE] but working more like a
tightly type-declared, named function. (Function definitions were on
the property list, so every symbol had a heap of properties that had
to be checked, and while there was an optimization that pasted over
the speed of this, there was no way to paste over the awful semantic
effect of being able to "reorder" the property list and hence re-order
how functions were looked up on a per-function basis.  Still, the fact
that, conceptually at least, you could plunk an array in the function
cell was pretty cool.)

I _think_ (but am not sure--I don't remember a discussion, I'm just
speculating) the reason that the ability to plop it into a function
cell was gotten rid of may have been that it required every array in
the world to have code in it that picked up the "jump" to it as a
function and fixed that.  This was probably done on the PDP10 by a
PUSHJ to a common subroutine that could pick up its state information
by examining the stack, so it probably just required a single
instruction to do, and maybe in Common Lisp the worry was that it
would require too many instructions in one or another architecture and
might overly constrain the implementation of arrays.  Although because
CL is a Lisp2, one could have always noticed the function cell being
set to an array (a statistically rare event) and arranged for it to
get a wrapper around it to make it executable--symbol-function could
know how to peek past that wrapper so it wouldn't have to be
user-visible.  Ah well...  This probably all happened just before I
joined the CLTL design group in 1981 (they'd been going probably for a
year or two before I blundered into membership).  Not that my being
there would necessarily have changed anything--it just explains why I
don't know what the reasoning was.
From: Ray Dillinger
Subject: Re: modifying array access syntax
Date: 
Message-ID: <Mz2We.13675$p%3.58374@typhoon.sonic.net>
Kent;

I just wanted to say thanks, both for this and many many
other little insights into the history and design of lisp
dialects.

I've crawled all over the docs for lots of these "lost
lisps", and observed things I like (such as callable arrays)
and wondered how/why they didn't come forward into modern
lisps.

I've also chased the idea of lisp-1 vs. lisp-2 back into
the mists of time and found that most dialects originally
had symbols with property lists where you could store any
number of keyed datums. "Function cells" and "Value cells"
are in fact remnants of the property lists of these Lisp-N
systems.

But the fact is reading the docs doesn't convey the issues
that people actually ran into using these systems, and your
little aside here about the semantic cheese created when
reordering the property lists made me realize a new aspect
about what these systems were like in practice.

(note:  I think I've decided that properties for symbols
are a win; symbols *should* be able to contain an arbitrary
set of named values.  But I think the interface for it
should leave it possible for it to be implemented as a
hash table instead of a list, and I'm still not sold on
the idea of different evaluation rules for first vs.
other positions in a call, at least not by default).

			Bear
From: Pascal Bourguignon
Subject: Re: modifying array access syntax
Date: 
Message-ID: <87irx3yxpf.fsf@thalassa.informatimago.com>
Ray Dillinger <····@sonic.net> writes:
> (note:  I think I've decided that properties for symbols
> are a win; symbols *should* be able to contain an arbitrary
> set of named values.  But I think the interface for it
> should leave it possible for it to be implemented as a
> hash table instead of a list, [...]

You only need to shadow symbol-plist and get.

But the plist is a feature of the API with a lot of nice properties.
So if you want to store the data in a hash table, you still have to
provide most of these properties, which is costly for SYMBOL-PLIST.
There may be a win for GET, but mind the constants!  A plist with only
a few keys will be faster than a hash table.


(defpackage "HASH-PLIST-COMMON-LISP"
  (:nicknames "HPL-CL")
  (:use "COMMON-LISP")
  (:shadow "SYMBOL-PLIST" "GET"))
(in-package "HASH-PLIST-COMMON-LISP")

(defparameter *plists* (make-hash-table :test (function eq))
  "A hash mapping symbols to a hash mapping keys to a stack of shadowed values")
;; we should use a weak hash-table if available.


;; note: plists have the property of being able to hide definitions.
(defun symbol-plist (symbol)
  (let ((h (gethash symbol *plists*))
        (r '()))
    (when h  ; a symbol with no entry has an empty plist
      (maphash (lambda (k vl) (dolist (v (reverse vl)) (push v r) (push k r))) h))
    r))

(defun (setf symbol-plist) (plist symbol)
  (let ((h (make-hash-table :test (function eql))))
    (loop
       for (v k) on (reverse plist) by (function cddr)
       do (push v (gethash k h)))
    (setf (gethash symbol *plists*) h))
  plist)


(defun get (symbol indicator &optional default)
  (let ((h (gethash symbol *plists*)))
    (if h
        (multiple-value-bind (vl p) (gethash indicator h)
          (if p (first vl) default))
        default)))

(defun (setf get) (value symbol indicator &optional default)
  (declare (ignore default))
  (let ((h (or (gethash symbol *plists*)
               (setf (gethash symbol *plists*)
                     (make-hash-table :test (function eql))))))
    (multiple-value-bind (vl p) (gethash indicator h)
      (if p
          (setf (first vl) value)
          (push value (gethash indicator h)))
      value)))


(defpackage "HASH-PLIST-COMMON-LISP-USER"
  (:nicknames "HPL-CL-USER")
  (:use "HASH-PLIST-COMMON-LISP"))
(in-package "HASH-PLIST-COMMON-LISP-USER")

;; Now you can use SYMBOL-PLIST and GET as usual.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Grace personified,
I leap into the window.
I meant to do that.
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <ufys6fj6k.fsf@nhplace.com>
Pascal Bourguignon <····@mouse-potato.com> writes:

> Ray Dillinger <····@sonic.net> writes:
> > (note:  I think I've decided that properties for symbols
> > are a win; symbols *should* be able to contain an arbitrary
> > set of named values.  But I think the interface for it
> > should leave it possible for it to be implemented as a
> > hash table instead of a list, [...]
> 
> You only need to shadow symbol-plist and get.

Well, as you implicitly noted in your code, you also need weak hash
tables to complete the illusion.  So your use of "only" here depends
a great deal on the way in which you're using plists.  Implicit in
the notion of plists is the subtle issue that the plists are garbage
collected if the pointers to the symbols are collected.

In an implementation that is absent weak hash tables, it's perhaps
safer to avoid the abstraction you suggest, and instead to make "hash
functions" (functions closed over a hash table that can do get/set
service operations on it) on a per-property basis, since you can drop
your pointer to the function (and the table) when you're done.
Statistically, if you have a pattern of ever dropping these hash
functions (and hence their tables), you will tend to sometimes at
least, not always, drop pointers to symbols that want to be collected.
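
[Kent's per-property "hash functions" sketched out; the names here are
mine, not from the thread. One closure pair per property, closed over
a private table, so dropping the closures drops the whole table for
the collector:]

```lisp
(defun make-property-accessors ()
  (let ((table (make-hash-table :test #'eq)))
    (values (lambda (symbol) (gethash symbol table))                 ; getter
            (lambda (symbol value)                                   ; setter
              (setf (gethash symbol table) value)))))

(multiple-value-bind (color-of set-color) (make-property-accessors)
  (funcall set-color 'apple 'red)
  (funcall color-of 'apple))   ; => RED
```

The table still holds strong references to its symbols while it lives,
which is exactly the weak-hash-table caveat above; the win is that the
table's lifetime is tied to the closures rather than to a global.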
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <ud5nafj0u.fsf@nhplace.com>
Ray Dillinger <····@sonic.net> writes:

> Kent;
> 
> I just wanted to say thanks, both for this and many many
> other little insights into the history and design of lisp
> dialects.
 
I'm glad you and others seem to find it useful.  We're all just here
for a finite time on earth and anything left in our brains at the end
was just wasted effort to acquire.  The story of Galois and the night
before his duel has always impressed me as important in this regard.
Some say the story is exaggerated, but in my view it is impossible to
overstate the importance of the moral of that story.
 Refs: http://motivate.maths.org/conferences/conf66/c66_groups.shtml
       http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Abstract_groups.html
       http://www.maa.org/devlin/devlin_aug.html

> I've crawled all over the docs for lots of these "lost
> lisps", and observed things I like (such as callable arrays)
> and wondered how/why they didn't come forward into modern
> lisps.

Right.  I wish I knew.  I published recent speculation on the ARRAY
issue, but I wasn't around for that decision.  I'll try to ask Steele
and see if he remembers.  As to others, I guess you'd have to name
them each individually.

> I've also chased the idea of lisp-1 vs. lisp-2 back into
> the mists of time and found that most dialects originally
> had symbols with property lists where you could store any
> number of keyed datums. "Function cells" and "Value cells"
> are in fact remnants of the property lists of these Lisp-N
> systems.

Yes, and I think a lot of this was about a programming style in Lisp
where you were focused on associative links between things where at
the outset you didn't know how many you'd need. So the whole design is
"stretchy", accommodating redefinition, and in particular the need for
wider data than you originally started with, but without the 
recompilation step that would happen if you widened a struct.

It was also about the social idea that some of these are system data
structures and some are user data structures, and so you wanted a
paradigm where users and the system could, with appropriate
discipline, play well together.
 
> But the fact is reading the docs doesn't convey the issues
> that people actually ran into using these systems, and your
> little aside here about the semantic cheese created when
> reordering the property lists made me realize a new aspect
> about what these systems were like in practice.

Yes, this was about the GETL operation in Maclisp, which got the first
of several properties.  The interpreter effectively did (GETL X
'(FEXPR EXPR SUBR LSUBR FSUBR ARRAY MACRO)) and then dispatched on
whichever property was first in the plist, but then it ALSO had a
separate table somewhere called a uuolinks table [the name uuolinks is
about as obscure as the origin of the names car/cdr, so I won't try to
explain here] that was used to mediate compiled access to the ones of
these that were meaningful at compile time (FEXPR, FSUBR, and MACRO
having been dealt with at syntaxing time, but EXPR, SUBR, LSUBR, and
I'm pretty sure also ARRAY still being possible runtime uses).  All
calls went indirect through the uuolinks table, which knew the right
thing to jump to.  You could snap the links to accommodate
redefinition, and you could preclude the possibility of snapping them
by removing the indirection if you wanted a little bit of extra speed.
So yeah, a lot of messiness semantically.  And although I can't cite a
specific example because they were hard to construct, people really
did little tricks where they would order the plist so that it would
have a specific total order that was convenient to faking out the
various partial orders of things that were looked at at various
different times.  Note, too, that macros themselves couldn't be
compiled but someone eventually figured out that it worked to put a
symbol in the macro property, causing a funcall on the symbol, and
that the macro could be compiled by compiling that symbol.... and so
on. Maclisp also made heavy use of a (defun (sym propname) bvl . body)
notation for user-defined properties.

> (note:  I think I've decided that properties for symbols
> are a win; symbols *should* be able to contain an arbitrary
> set of named values.  But I think the interface for it
> should leave it possible for it to be implemented as a
> hash table instead of a list, and I'm still not sold on
> the idea of different evaluation rules for first vs.
> other positions in a call, at least not by default).

Well, this was in fact used, but I agree that it's mostly not a good
idea.  And, in fact, I think the whole access to the plist thing is a
bad idea since it's an opportunity for someone to remove properties
they didn't mean to.  (I think historically it's there so that
programs can optimize the plist for speed, but I think that's less
needed now, and in fact if the plist could invisibly/dynamically
become a hash table when it grew large, it would be less needed still.)
So I agree with you here.

In Maclisp, I have certainly written algorithms myself that did 
 (LET ((MARKER (GENSYM)))
   (SETPLIST 'FOO (LIST* MARKER VALUE (PLIST 'FOO))) ...)
in order to not waste the cycles traversing the plist to see if
the gensym was already there (since I knew it was not), and I have
certainly written the corresponding optimization:
 (IF (EQ (CAR PLIST) MARKER)
     (SETPLIST 'FOO (CDDR (PLIST 'FOO)))
     (REMPROP 'FOO MARKER))
in order again to not do a function call to FOO (since CAR and EQ
would compile open). (This is useful when implementing a SUBST-like
operation efficiently.)  But there are other ways to achieve this
kind of efficiency, and one of them is just to use a private hash
table, and then the REMPROPs aren't needed. You just discard the
table.
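
[The private-hash-table alternative Kent describes might look like this in
modern CL; a sketch only, with illustrative names, standing in for the
plist-marking trick in a SUBST-like traversal:]

;; Instead of consing a gensym marker onto a symbol's plist and
;; REMPROPing it afterward, keep the marks in a table private to the
;; algorithm and simply drop the table when done.
(defun subst-with-table (new old tree)
  (let ((seen (make-hash-table :test 'eq)))  ; private; discarded at the end
    (labels ((walk (x)
               (cond ((eq x old) new)
                     ((atom x) x)
                     ((gethash x seen))      ; reuse an already-rewritten cons
                     (t (setf (gethash x seen)
                              (cons (walk (car x)) (walk (cdr x))))))))
      (walk tree))))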
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <uvf11wscv.fsf@nhplace.com>
Kent M Pitman <······@nhplace.com> writes:

> > I've crawled all over the docs for lots of these "lost
> > lisps", and observed things I like (such as callable arrays)
> > and wondered how/why they didn't come forward into modern
> > lisps.
> 
> Right.  I wish I knew.  I published recent speculation on the ARRAY
> issue, but I wasn't around for that decision.  I'll try to ask Steele
> and see if he remembers.  As to others, I guess you'd have to name
> them each individually.

I asked Steele:

  Maclisp had funcallable arrays but they were left out in CL.  Do you
  remember why?  I was speculating on comp.lang.lisp that some
  implementor might have worried it would overly restrict their data
  layout and keep them from being appropriately space efficient or
  something like that.  That argument doesn't seem to hold water since
  (a) FUNCALL could have special-cased them in a type dispatch and
  (b) (SETF (SYMBOL-FUNCTION ...) ...) could have introduced a "wrapper"
  around them during storing, since there wouldn't be likely to be many
  and it probably wouldn't be done often enough for the wrapper size to
  matter, just as it probably did for making lists whose car is LAMBDA
  be funcallable under CLTL1.  So I'm at a loss to know any reason other
  than just "no one thought it would be fun" that holds up, unless one of
  these strategies above just wasn't thought of.
  (The same trick would have worked for hash tables.)

To which he replied:

  I don't remember all the details.  I think your strategy (b)
  for wrapping an array was not a strategy I was conscious
  of at the time, but it's hard to be sure after so long.

  I hauled out the Swiss Cheese and Colander drafts,
  a draft Spice Lisp manual, the "Votes on the First Draft"
  document, and the email archives.  Every indication is
  that the decision not to funcall arrays predates
  Common Lisp and probably can be tracked to Spice Lisp
  or (more likely) the NIL project.

He went on to suggest some people involved in those projects that I 
should perhaps ask, so I'll follow that up and report back if I get
back anything.
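
[Strategy (b), wrapping a non-function at (SETF (SYMBOL-FUNCTION ...)) time,
can be approximated in user code with a closure.  A sketch, not what any
implementation actually did; the names are made up:]

;; Make a symbol callable "as an array" by installing a closure that
;; closes over the array and indexes it.
(defun install-array-function (symbol array)
  (setf (symbol-function symbol)
        (lambda (&rest indices)
          (apply #'aref array indices))))

(install-array-function 'squares #(0 1 4 9 16))
(squares 3)  ; => 9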
From: Frode Vatvedt Fjeld
Subject: Re: modifying array access syntax
Date: 
Message-ID: <2hek7pl8sn.fsf@vserver.cs.uit.no>
Kent M Pitman <······@nhplace.com> writes:

>   (a) FUNCALL could have special-cased them [arrays] in a type
>   dispatch and

I'd just like to point out that FUNCALL is a hot candidate for
inlining, and increasing its number of cases from 2 to 3 (i.e. from
dealing with functions and symbols to also arrays) might incur
non-negligible costs in terms of code-size, as well as making
FUNCALL's performance more "skewed": for any typecase the earlier
clauses will be faster than the later ones. If there were several
incompatible storage-layouts for arrays (say, simple-vectors,
basic-arrays and displaced-arrays), both problems are exacerbated.

-- 
Frode Vatvedt Fjeld
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <umzmdf3fi.fsf@nhplace.com>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:

> Kent M Pitman <······@nhplace.com> writes:
> 
> >   (a) FUNCALL could have special-cased them [arrays] in a type
> >   dispatch and
> 
> I'd just like to point out that FUNCALL is a hot candidate for
> inlining, and increasing its number of cases from 2 to 3 (i.e. from
> dealing with functions and symbols to also arrays) might incur
> non-negligible costs in terms of code-size, as well as making
> FUNCALL's performance more "skewed": for any typecase the earlier
> clauses will be faster than the later ones. If there were several
> incompatible storage-layouts for arrays (say, simple-vectors,
> basic-arrays and displaced-arrays), both problems are exacerbated.

Perhaps, although one way to do the inlining is a simple test for the
fast case and then a trap to the slower case for odd types that were
statistically not predicted.  (Speed would still be achievable for those
who wanted it by using AREF.  We're only talking about people who want
the flexibility of substituting an array for a function.  Not every
FUNCALL is in a tight inner loop where speed is the thing that matters.)

One could also control this by opting to inline FUNCALL only in
certain cases, e.g., only where space was not being optimized.

IMO, one should first decide what one wants a language to do and then
worry about how to specify cases like this.  The Lisp community has a
tradition of finding creative ways around the efficiency issues that
might seem to prohibit interesting expressive styles.
From: Ray Dillinger
Subject: Re: modifying array access syntax
Date: 
Message-ID: <vTKWe.13946$p%3.60752@typhoon.sonic.net>
Kent M Pitman wrote:

> IMO, one should first decide what one wants a language to do and then
> worry about how to specify cases like this.  The Lisp community has a
> tradition of finding creative ways around the efficiency issues that
> might seem to prohibit interesting expressive styles.

Heh.  I'm actually making *all* datatypes have call semantics
in my toy-lisp (which is actually looking less and less like a
toy ...  I suppose I shall have to name it soon).  There's a
table of functions, indexed by typetag.  So a call to a string
(or an array) indexes into the table by the string or array
typetag to find the function to call (with the particular
string or array as the first argument).  In several cases, like
characters, the function that gets called just signals an error
because there aren't any "implicit calls" for characters.

There's no "skewing" of performance; calls to any non-symbol
involve one dereference (to get the typetag) and one array
lookup, and calls to any symbol involve one dereference (to
get the symbol hashcode) and one lookup in the symbol table,
which in many cases actually takes longer because of inherited
scopes. I haven't done it yet, but in cases where I can
prove what type that first element in the call is going to
be, I could compile so as to skip the lookup and open-code
it (hmm, assuming I leave "typecall" functions constant/
global/ immutable, like language primitives ... which they
are now but I'll have to think about, I guess).

I agree with you that semantics come first.  Although I want
nasty low-level machine code capabilities for defining
libraries and doing hardware I/O and such, I prefer to
design the actual _language_ as though the "sufficiently
smart compiler" were a reality and the hardware almost
completely irrelevant. I think that in the long run it is
better, because then the language won't become obsolete
(along with all the code written in it) when compilers
do get smarter or hardware gets faster or a port to a new
architecture happens.  When I think of all the "dead code"
written unportably for LispM's, I see a language design
mistake I don't want to repeat.

I think the way CL does declarations (and declamations, etc) is
the "right" thing for performance hacks. They never change the
semantics of correct code. Further, they can be ignored/filtered
out/removed when the performance issues they address get more
general solutions, or when future hardware speed increases cause
them to fall below the threshold of whether the user cares.

				Bear
From: Ray Dillinger
Subject: Re: modifying array access syntax
Date: 
Message-ID: <0QWXe.13$u8.49@typhoon.sonic.net>
Frode Vatvedt Fjeld wrote:
> Kent M Pitman <······@nhplace.com> writes:
> 
> 
>>  (a) FUNCALL could have special-cased them [arrays] in a type
>>  dispatch and
> 
> 
> I'd just like to point out that FUNCALL is a hot candidate for
> inlining, and increasing its number of cases from 2 to 3 (i.e. from
> dealing with functions and symbols to also arrays) might incur
> non-negligible costs in terms of code-size, as well as making
> FUNCALL's performance more "skewed": for any typecase the earlier
> clauses will be faster than the later ones. If there were several
> incompatible storage-layouts for arrays (say, simple-vectors,
> basic-arrays and displaced-arrays), both problems are exacerbated.

Avoid skew: maintain a jump table indexed by typetag. In
calling an object of a given type, you apply a mask to that
object to get the typetag, then use the typetag as an index
into the table to fetch an address.  Jump to the address
and you are now executing the "call semantics" of the given
datatype.  The method is very extensible, and takes about the
same time for ten different callable types as for two.
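
[Transposed into a CL sketch for concreteness; CL exposes no typetag mask,
so the tags and table here are hypothetical:]

;; One "call semantics" function per type, indexed by a small tag.
(defparameter *call-table*
  (vector (lambda (f &rest args) (apply f args))                   ; 0: function
          (lambda (s &rest args) (apply (symbol-function s) args)) ; 1: symbol
          (lambda (a &rest args) (apply #'aref a args))            ; 2: array
          (lambda (obj &rest args)                                 ; 3: the rest
            (declare (ignore args))
            (error "~S is not callable" obj))))

(defun type-tag (object)  ; stand-in for the one-mask tag fetch
  (typecase object
    (function 0) (symbol 1) (array 2) (t 3)))

(defun dispatch-call (object &rest args)
  (apply (svref *call-table* (type-tag object)) object args))

;; (dispatch-call #(10 20 30) 1) => 20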

				Bear
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <u3bo52dz7.fsf@nhplace.com>
Kent M Pitman <······@nhplace.com> writes:

I asked Scott Fahlman at CMU (who created Spice Lisp) the same
question I had asked Steele about how funcallable arrays got lost.
[In my message to him I also alluded to the fact that hash tables
could have been made funcallable, although that was never implemented
in any Lisp I know of.]  Here's his response, in case anyone is
curious:

| Now that you mention it, I do remember using funcallable arrays in
| some version of Maclisp.  I'm not sure about funcallable hashtables.
| And I don't remember if either of these survived into Lisp Machine
| Lisp or Zetalisp or NIL.
|
| I don't recall any discussion of implementing funcallable arrays or
| hashtables in Spice Lisp or Common Lisp, so I think the idea had
| quietly died sometime before this -- or else, the feature was so
| little used that nobody noticed when we failed to include it.
|
| Personally, while we *could* implement funcallable arrays, I don't
| find the idea very attractive.  I think that overloading of that
| kind is cute, but confusing.  I suppose that we could also treat
| strings as functions: with one numeric argument you grab a character
| and with two you grab a substring, and with a keyword argument that
| is the name of a language you translate the string, and with a
| keyword that is :google or :wordnet you look up the string...  Call
| me old-fashioned, but I think it's good to have an explicit function
| to serve as the "verb" in the sentence.
|
| So if the idea had come up back around 1980, I probably would have
| argued against it, but I don't remember doing this.

I should note as a meta-observation that the other issue illustrated
in this exchange is the tension between wanting "technical
capabilities" and wanting "a particular style of expression".  Scott's
remarks here remind me that there were those in the CL design process
who had very strong feelings about what the character of the language
should be, and who wanted certain operations in or out based on
whether it would promote or discourage this or that particular
programming style.  Some people were very neutral on such issues, but
some people had definite opinions.

It's a bit like the US controversy over prayer in the schools, where
you can find a lot of people who think it would be good, but then when
you start to attach specifics to the idea, the apparent consensus 
dissipates.

So the resulting language involved a heavy dose of compromise.
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <uacicesad.fsf@nhplace.com>
Kent M Pitman <······@nhplace.com> writes:

And here's the reply about what happened to funcallable arrays from
JonL White, who worked on [among many other things] the NIL (New
Implementation of Lisp) implementation for the VAX in the late 70's /
early 80's...

| GLS is right that NIL not only didn't have such a feature for arrays.
| In fact, the NIL gang pressed strongly against the Symbolics clan that
| tried to prevent truly "simple" type arrays from being introduced as a
| type into Common Lisp.  (Witness the "RPG Memorial Vector"
| proposal---a last-ditch effort to forestall the hairification of
| "simple" arrays.)  The issue I believe was like this: Arrays are
| simple data structures, and to be competitive against more
| conventional languages, we need to keep them simple enough so that
| access to and from them is compilable in Lisp accordingly (i.e., on
| "stock" hardware, as merely an indexed memory fetch or store.)  With
| special hardware, the two fix-ups you mention are virtually no cost;
| but they would introduce yet another split between the LispM type
| gangs and the "stock hardware" gangs, because the general case
| couldn't be open-compiled merely with a (simple) ARRAY declaration.
|
| The decision must have come down to this: is there such an enormous
| power extended to the language by this merging of ARRAY underneath the
| FUNCTION built-in type, that it is worth the glitch in stock-hardware
| compilation strategies?  And the NIL guys were principally in the
| "stock hardware" gang.
From: Pascal Costanza
Subject: Re: modifying array access syntax
Date: 
Message-ID: <3napiaFm1fnU1@individual.net>
············@gmail.com wrote:
> I do a lot of multi-dimensional array number crunching.  Setting aside
> possible performance drawbacks for a moment, Common Lisp appears to
> have things I've desired for a long time, in particular the macro
> capability, and multi-methods secondarily.  Lisp is advertised as "the
> extensible language" and I hope so, because the array access notation
> is IMO unacceptable in the native syntax.
> 
> C is awkward: a[i][j][k]
> 
> fortran is really the most natural of the usual suspects: a(i,j,k)
> 
> CL is horrendous: (aref a i j k)

However, you can do things with this "notation" that you cannot do as 
easily with the other ones. From the HyperSpec:

(aref (setq beta (make-array '(2 4)
                     :element-type '(unsigned-byte 2)
                     :initial-contents '((0 1 2 3) (3 2 1 0))))
        1 2) =>  1
(setq gamma '(0 2))
(apply #'aref beta gamma) =>  2
(setf (apply #'aref beta gamma) 3) =>  3
(apply #'aref beta gamma) =>  3
(aref beta 0 2) =>  3

Especially the (apply #'aref ...) forms are interesting.

As soon as you start with custom syntax for a concept, you will probably 
later on need to extend the syntax to include more cases to handle 
similar things. (That's why "syntax-driven" languages all eventually 
evolve into the beasts they are.)

> What I would like to do is put CL extensibility to the test by defining
> a new array access syntax.  I'd like to start very simple, although I
> have a lot of ideas about how to generalize this into something really
> nice.  I'd like:
> 
> a_i,j,k => (aref a i j k)
> 
> Does this call for defmacro or set-macro-character, and why?

defmacro doesn't allow you to change the basic structure of 
s-expressions. The basic structure of s-expression is that the first 
element in a list (here: aref) unambiguously determines what the meaning 
of the whole form is. With a defmacro, you need to give it a name and 
later on use that macro name as the first element of a form to "invoke" 
the macro.

In your proposed syntax, you want to put the element that determines the 
meaning of your expression after the first element, and this will 
complicate your solution. If you could live with the notation [a i j k], 
you would still hide the 'aref, but the meaning of the form would 
instead be determined by the first element (here the opening bracket) 
which would be more straightforward to transform into a standard 
s-expression.

You still cannot do this with a macro, and you would still lose the 
ability to apply your new array accessor.
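
[For the record, the bracket variant is a few lines with
set-macro-character.  A sketch; it mutates the current readtable, so real
code should work on a copy via copy-readtable:]

;; [a i j k] reads as (aref a i j k).
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    (cons 'aref (read-delimited-list #\] stream t))))

;; Make ] terminate tokens the way ) does.
(set-macro-character #\] (get-macro-character #\) ))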


Pascal

-- 
OOPSLA'05 tutorial on generic functions & the CLOS Metaobject Protocol
++++ see http://p-cos.net/oopsla05-tutorial.html for more details ++++
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125163133.148499.6250@z14g2000cwz.googlegroups.com>
Pascal Costanza wrote:
> ············@gmail.com wrote:
> > I do a lot of multi-dimensional array number crunching.  Setting aside
> > possible performance drawbacks for a moment, Common Lisp appears to
> > have things I've desired for a long time, in particular the macro
> > capability, and multi-methods secondarily.  Lisp is advertised as "the
> > extensible language" and I hope so, because the array access notation
> > is IMO unacceptable in the native syntax.
> >
> > C is awkward: a[i][j][k]
> >
> > fortran is really the most natural of the usual suspects: a(i,j,k)
> >
> > CL is horrendous: (aref a i j k)
>
> However, you can do things with this "notation" that you cannot do as
> easily with the other ones. From the HyperSpec:
>
> (aref (setq beta (make-array '(2 4)
>                      :element-type '(unsigned-byte 2)
>                      :initial-contents '((0 1 2 3) (3 2 1 0))))
>         1 2) =>  1
> (setq gamma '(0 2))
> (apply #'aref beta gamma) =>  2
> (setf (apply #'aref beta gamma) 3) =>  3
> (apply #'aref beta gamma) =>  3
> (aref beta 0 2) =>  3
>
> Especially the (apply #'aref ...) forms are interesting.
>
> As soon as you start with custom syntax for a concept, you will probably
> later on need to extend the syntax to include more cases to handle
> similar things. (That's why "syntax-driven" languages all eventually
> evolve into the beasts they are.)

Agreed, but that's also kind of the point.  I am specializing for a
specific domain and want to tailor the general (but loose and verbose)
into something more specific (and tight and concise.)

Let me say a few words about my design thoughts here, and maybe people
can help me see a better direction.

I write codes that operate in 1, 2, and 3 dimensional "modes".  A great
deal of expression can be done in a way that is general to this
dimensionality, but only using various preprocessors like cpp, m4, or
some custom thing on top of C/C++/f90 code, with all the drawbacks this
"detached" preprocessing implies.  CL's macro facility seems a nice
integrated solution here.

One issue is always how to access an element in an array of which the
dimensionality is general.  For example, I may have an expression to
average neighboring elements in a computational "i" direction.  Here's
the fortran code for 1d, 2d, 3d:

1D: 0.5*(a(i) +a(i+1))
2D: 0.5*(a(i,j) +a(i+1,j))
3D: 0.5*(a(i,j,k) +a(i+1,j,k))

What I need is a notation which says "shift the i index by one and
default the rest to unshifted" which then expresses all three
possibilities above.  CL's apply seems the way to approach this.

But big expressions with "apply" are really quite far from how you'd
like to express yourself in a notation that is close to the domain
here.  What I had in mind is something that resembles standard notation
for the field in question, which is a subscripted notation.

a_ => a_i => (aref a i) in 1D
a_ => a_i,j => (aref a i j) in 2D
etc.

a_i+1 => a_i+1 => (aref a (1+ i)) in 1D
a_i+1 => a_i+1,j => (aref a (1+ i) j) in 2D
etc.

Now you're probably screaming in horror at the infix notation in the
subscript.  Well, tough.  That's how people write these equations on
paper and that's how they should be able to write them in code.  If
this means translating behind the scenes into prefixed lisp, so be it.

This is just scratching the surface, but there's a flavor.
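
[The dimension-general shift itself needs no new surface syntax; an
ordinary macro parameterized on the axis will do.  A sketch with
hypothetical names, assuming axis and delta are literal numbers at
macroexpansion time:]

;; (aref-shifted a (i j k) 0 1) expands to (aref a (+ i 1) j k):
;; shift the index on axis 0 by 1, leave the others unshifted.
(defmacro aref-shifted (array indices axis delta)
  `(aref ,array
         ,@(loop for idx in indices
                 for pos from 0
                 collect (if (= pos axis) `(+ ,idx ,delta) idx))))

;; The i-direction average, written once for any dimensionality:
(defmacro i-average (array indices)
  `(* 0.5 (+ (aref ,array ,@indices)
             (aref-shifted ,array ,indices 0 1))))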

Thanks,
Bob
From: Michael Fleming
Subject: Re: modifying array access syntax
Date: 
Message-ID: <IW2Qe.322$m56.61@newssvr24.news.prodigy.net>
············@gmail.com wrote:

> Now you're probably screaming in horror at the infix notation in the
> subscript.  Well, tough.  That's how people write these equations on
> paper and that's how they should be able to write them in code.  If
> this means translating behind the scenes into prefixed lisp, so be it.

Do people really right equations in prefix? Maybe the very simplest, but 
what I see is a lot of symbols, superscripts, subscripts, bars to 
indicate division, different fonts, prefix operators, and so on. No one 
writes equations on paper in the sort of infix that is fed into compilers.

So switching + to prefix is not really doing much more violence to the 
written equation as changing a division indicated by a bar to an infix 
/, for example.

To make what is happening in the equation clear, it is best to break it 
up into easily comprehensible chunks with descriptive names, and then the
end product is another easily comprehensible chunk. I think this is
true whether you are dealing with a language with prefix functions or 
one with infix operators. The problem is to take something inherently 
two dimensional and type it into a teletype. You could even use a Morse 
code key to enter the text.

Why are we still using teletypes at all? Why can't we enter the equation 
into our code directly as the mathematician writes it? This would not be 
hard, really. A new keyboard for the left hand with math symbols on it, 
a graphics pad for positioning, and smart symbols that realize what 
surrounds them. So if you move the pointer to the top right of an 
integral sign, the program realizes you are entering the upper limit.

It's a thought.

Michael
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125189832.503316.257110@f14g2000cwb.googlegroups.com>
Michael Fleming wrote:
> ············@gmail.com wrote:
>
> > Now you're probably screaming in horror at the infix notation in the
> > subscript.  Well, tough.  That's how people write these equations on
> > paper and that's how they should be able to write them in code.  If
> > this means translating behind the scenes into prefixed lisp, so be it.
>
> Do people really right equations in prefix?

In subscript expressions they certainly do.  i/2 would be preferred to
the horizontal bar notation in a subscript.

Thanks,
Bob
From: Michael Fleming
Subject: Re: modifying array access syntax
Date: 
Message-ID: <iJ8Qe.5268$Z87.5144@newssvr14.news.prodigy.com>
············@gmail.com wrote:
> Michael Fleming wrote:
> 
>>············@gmail.com wrote:
>>
>>
>>>Now you're probably screaming in horror at the infix notation in the
>>>subscript.  Well, tough.  That's how people write these equations on
>>>paper and that's how they should be able to write them in code.  If
>>>this means translating behind the scenes into prefixed lisp, so be it.
>>
>>Do people really right equations in prefix?

Sorry, I meant "write in infix", of course. I even spell checked the post.
> 
> In subscript expressions they certainly do.  i/2 would be preferred to
> the horizontal bar notation in a subscript.

You'll see short equations, and parts of larger equations in infix of 
course! But even in your example, you are talking about a subscript, 
which has a two dimensional side to it. Beyond a certain level of 
complexity, mathematicians use all sorts of devices to make the meaning 
of the equation clear. I think it's a real stretch to say that "people 
write these equations" in infix. You've got to translate a math 
expression on a page that uses many typographical and spatial devices into
a one-dimensional, mono font string anyway. So it's a small additional 
step to make everything prefix. You soon become accustomed to it.

But who cares? I'm with you. I want to enter the mathematical notation 
as it exists in the math books directly, exactly as it exists there, and 
not go through any translation phases. That is, math notation straight 
into my program and to the compiler.

Is anybody doing this? There are no technological hurdles that I can see.

Michael


> Thanks,
> Bob
> 
From: Harald Hanche-Olsen
Subject: Re: modifying array access syntax
Date: 
Message-ID: <pcozmr27924.fsf@shuttle.math.ntnu.no>
+ Michael Fleming <·····@pacbell.net>:

| I want to enter the mathematical notation as it exists in the math
| books directly, exactly as it exists there, and not go through any
| translation phases. That is, math notation straight into my program
| and to the compiler.
| 
| Is anybody doing this? There are no technological hurdles that I can see.

One problem is that mathematics notation is ambiguous.  x² could mean
the square of x, but it could also be the second component of a vector
x.  The latter is common notation in differential geometry.  The
Einstein summation notation may or may not be in force.  Examples of
this nature abound.  (Mathematicians like to write sin 2x rather than
sin(2x), for example.)  Mathematical texts usually have enough
contextual information to disambiguate the notation; you would have to
provide the same sort of information to your compiler.

Some computer algebra systems have already started down this path.
Maple lets you enter formulas as they are written, though I haven't
gotten used to it and much prefer the linear notation, if only because
it avoids reaching for the mouse umpteen times to write a simple
formula.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Michael Fleming
Subject: Re: modifying array access syntax
Date: 
Message-ID: <IRiQe.5285$Z87.14@newssvr14.news.prodigy.com>
Harald Hanche-Olsen wrote:
> + Michael Fleming <·····@pacbell.net>:
> 
> | I want to enter the mathematical notation as it exists in the math
> | books directly, exactly as it exists there, and not go through any
> | translation phases. That is, math notation straight into my program
> | and to the compiler.
> | 
> | Is anybody doing this? There are no technological hurdles that I can see.
> 
> One problem is that mathematics notation is ambiguous.  x² could mean
> the square of x, but it could also be the second component of a vector
> x.  The latter is common notation in differential geometry.  The
> Einstein summation notation may or may not be in force.  Examples of
> this nature abound.  (Mathematicians like to write sin 2x rather than
> sin(2x), for example.)  Mathematical texts usually have enough
> contextual information to disambiguate the notation; you would have to
> provide the same sort of information to your compiler.

Right, but only one notation is in force at a time. My (dream) system 
would have several modes and would be extensible.

> Some computer algebra systems have already started down this path.
> Maple lets you enter formulas as they are written, though I haven't
> gotten used to it and much prefer the linear notation, if only because
> it avoids reaching for the mouse umpteen times to write a simple
> formula.

The mouse is a terrible invention. It causes crippling injuries for one 
thing. I'm not talking about pawing around a mouse! It belongs in the 
computer museum along with the teletype and the qwerty keyboard.

The left hand selects the symbols with a chording, ergonomic keyboard. 
(The little finger and the ring finger would share a button, for 
instance.) The symbols that are available for selection change according 
to the context, and are visible on the screen, so there would be 
constant visual hints.

The right hand uses a wand on a graphics tablet for positioning the 
symbol chosen by the left. The wand has a button to place the symbol, 
and a way to size the symbols. The left hand stays on the pad, the right 
on the graphics tablet.

Entering mathematical expressions would be a fluid movement of the right
hand with some button presses with the left.

All that assumes a right-handed user, of course. Reverse it for lefties.

That's the general idea. There are a lot more refinements.

Thanks for the Maple tip. I'll see if there is a demo version.

Michael
From: Bradley J Lucier
Subject: Re: modifying array access syntax
Date: 
Message-ID: <det039$r3u@arthur.cs.purdue.edu>
In article <·················@newssvr14.news.prodigy.com>,
Michael Fleming  <·····@pacbell.net> wrote:
>Harald Hanche-Olsen wrote:
>> + Michael Fleming <·····@pacbell.net>:
>> 
>> | I want to enter the mathematical notation as it exists in the math
>> | books directly, exactly as it exists there, and not go through any
>> | translation phases. That is, math notation straight into my program
>> | and to the compiler.
>> One problem is that mathematics notation is ambiguous. ...

In the mid-eighties I had a similar goal.  Syntax seemed very important to me
then---I wanted to write summations using capital sigma, not (reduce ...).

Now that I have more experience writing programs in several mathematical
problem domains, syntax does not seem so important.  It's hard enough getting
the semantics right, or, more precisely, choosing what seems to be the right
level of abstraction for a problem.

For example, here is some code (in Gambit Scheme) for the conjugate-gradient
approximation of sparse, symmetric, positive-definite linear systems:

;;; This routine uses the notation in Carl de Boor's notes on
;;; the preconditioned conjugate gradient method.

(define (conjugate-gradient A                ; an Operator, we solve Ax=b
                            b                ; a Vector
                            stop-iterations? ; a call-back function, see below
                            #!optional 
                            (preconditioner identity-operator))
  (let ((z (Operator-apply preconditioner b)))
    (let loop ((z z)                        ; (Operator-apply preconditioner r)
               (p z)                        ; search direction
               (r b)                        ; residual
               (x (Vector-zero b))          ; approximate solution
               (c (Vector-dot-product b z)) ; z \cdot r
               (m 0))                       ; number of iterations so far.
      (declare (flonum))
      (if (stop-iterations? A preconditioner z p r x c m)
          x
          (let* ((Ap (Operator-apply A p))
                 (alpha (/ c (Vector-dot-product Ap p))))
            (let ((x (Vector-*axpy alpha p x))
                  (r (Vector-*axpy (- alpha) Ap r)))
              (let* ((z (Operator-apply preconditioner r))
                     (d (Vector-dot-product r z))
                     (p (Vector-*axpy (/ d c) p z)))
                (loop z p r x d (FIX (+ m 1))))))))))

I find two things interesting about this code.

The first is that if you have a CLOS-style object system and Vector-zero,
Vector-dot-product, Vector-*axpy, and Operator-apply are generics, then
this code works at the level of linear operators and vector spaces, not
just at the level of matrices and $\Bbb R^n$.  This is nice if you're
working with finite element spaces, etc.

Second, the code is a literal translation of the algorithm in the notes.

The first property is the important thing; the fact that you have to
translate the notation of the algorithm to the notation of the code
in order to program it seems to be a small thing to do once you get
the semantics right.
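
[In CL proper, the generic layer Brad describes is a handful of
defgenerics; a sketch under assumed names, since the Gambit code uses its
own object system:]

(defgeneric operator-apply (operator v)
  (:documentation "Apply a linear operator to a vector."))
(defgeneric vector-dot-product (u v))
(defgeneric vector-*axpy (alpha x y)
  (:documentation "Return alpha*x + y."))

;; One concrete method set, for plain arrays:
(defmethod operator-apply ((A array) (x vector))
  (let* ((n (length x))
         (y (make-array n :initial-element 0)))
    (dotimes (i n y)
      (dotimes (j n)
        (incf (aref y i) (* (aref A i j) (aref x j)))))))

(defmethod vector-dot-product ((u vector) (v vector))
  (reduce #'+ (map 'vector #'* u v)))

(defmethod vector-*axpy (alpha (x vector) (y vector))
  (map 'vector (lambda (xi yi) (+ (* alpha xi) yi)) x y))

Other method sets (finite element spaces, sparse operators) plug in
without touching the conjugate-gradient driver, which is Brad's point.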

Brad
From: Michael Fleming
Subject: Re: modifying array access syntax
Date: 
Message-ID: <nLKQe.284$lU2.37@newssvr11.news.prodigy.com>
Bradley J Lucier wrote:
> In article <·················@newssvr14.news.prodigy.com>,
> Michael Fleming  <·····@pacbell.net> wrote:
> 
>>Harald Hanche-Olsen wrote:
>>
>>>+ Michael Fleming <·····@pacbell.net>:
>>>
>>>| I want to enter the mathematical notation as it exists in the math
>>>| books directly, exactly as it exists there, and not go through any
>>>| translation phases. That is, math notation straight into my program
>>>| and to the compiler.
>>>One problem is that mathematics notation is ambiguous. ...
> 
> 
> In the mid-eighties I had a similar goal.  Syntax seemed very important to me
> then---I wanted to write summations using capital sigma, not (reduce ...).
> 
> Now that I have more experience writing programs in several mathematical
> problem domains, syntax does not seem so important.  It's hard enough getting
> the semantics right, or, more precisely, choosing what seems to be the right
> level of abstraction for a problem.
> 
> For example, here is some code (in Gambit Scheme) for the conjugate-gradient
> approximation of sparse, symmetric, positive-definite linear systems:
> 
> ;;; This routine uses the notation in Carl de Boor's notes on
> ;;; the preconditioned conjugate gradient method.
> 
> (define (conjugate-gradient A                ; an Operator, we solve Ax=b
>                             b                ; a Vector
>                             stop-iterations? ; a call-back function, see below
>                             #!optional 
>                             (preconditioner identity-operator))
>   (let ((z (Operator-apply preconditioner b)))
>     (let loop ((z z)                        ; (Operator-apply preconditioner r)
>                (p z)                        ; search direction
>                (r b)                        ; residual
>                (x (Vector-zero b))          ; approximate solution
>                (c (Vector-dot-product b z)) ; z \cdot r
>                (m 0))                       ; number of iterations so far.
>       (declare (flonum))
>       (if (stop-iterations? A preconditioner z p r x c m)
>           x
>           (let* ((Ap (Operator-apply A p))
>                  (alpha (/ c (Vector-dot-product Ap p))))
>             (let ((x (Vector-*axpy alpha p x))
>                   (r (Vector-*axpy (- alpha) Ap r)))
>               (let* ((z (Operator-apply preconditioner r))
>                      (d (Vector-dot-product r z))
>                      (p (Vector-*axpy (/ d c) p z)))
>                 (loop z p r x d (FIX (+ m 1))))))))))
> 
Very beautiful code. I'm sure you'll agree that the prefix notation does 
not inhibit comprehensibility, if you are accustomed to it. And it 
doesn't take long to become accustomed to prefix.

> I find two things interesting about this code.
> 
> The first is that if you have a CLOS-style object system and Vector-zero,
> Vector-dot-product, Vector-*axpy, and Operator-apply are generics, then
> this code works at the level of linear operators and vector spaces, not
> just at the level of matrices and $\Bbb R^n$.  This is nice if you're
> working with finite element spaces, etc.
> 
> Second, the code is a literal translation of the algorithm in the notes.

It also doesn't have any square roots or summation signs or long 
expressions that can be harder to grasp when written out linearly 
(prefix or infix or postfix, for that matter).

> The first property is the important thing; the fact that you have to
> translate the notation of the algorithm to the notation of the code
> in order to program it seems to be a small thing to do once you get
> the semantics right.

Well, that's certainly true. The raw mathematical expression could have 
bad numerical qualities, for instance. So scanning an expression 
straight from a book could be a disaster. Rockets could crash.

But is there any disadvantage to being able to enter a square root sign 
with the expression to take the square root of written under the bar 
directly into the code?  Except for the general awkwardness of the input 
devices we have, which makes it cumbersome to do so, I can't think of any.

Your code would look and work exactly the same, except for some 
attractively typeset math expressions.

A math expression is looked at as a whole, and demonstrates all sorts of 
relationships which otherwise have to be read out of the code.

But maybe the gain of including typeset math in programs is not worth 
the effort.

Thanks for your reply and the code.

Michael Fleming




> Brad
From: ······@earthlink.net
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125499039.885796.230360@g47g2000cwa.googlegroups.com>
Michael Fleming wrote:

> But is there any disadvantage to be able to enter a square root sign
> with the expression to take the square root of written under the bar
> directly into the code. Except for the general awkwardness of the input
> devices we have that make it cumbersome to do so, I can't think of any.

No one is stopping you from writing a pretty printer that does the
appropriate transform on (sqrt {expression}) and the rest of the
arithmetic.  The input system isn't all that hard either.  Unless your
keyboard has a sqrt key, you'll have to do something odd, but folks
learn emacs commands and use drop-downs, so that's no big deal IF
there's a benefit, which there isn't.

> Your code would look and work exactly the same except for some
> attractively type set math expressions.

And the benefit of a different presentation for 1% of my code is?

I'd be surprised if there are any non-trivial applications where more
than half of the code is math.  I suspect that the most math intensive
applications do very little math but make heavy use of GUI libraries.
This shouldn't be all that surprising - even math publications contain
a lot of non-math.

> A math expression is looked at whole, and demonstrates all sorts of
> relationship which have to be read out of the code.

A lisp expression is looked at as a whole and demonstrates
relationships between all lisp expressions, not just math ones.

A different format for math will make it harder to see relationships
that might exist between the 1% that is math and the other 99% of the
code.

"Standard" math notation is hieroglyphics for a very small subset of
things that one might want to talk/code about.  It isn't even all that
good within its domain.  (You can't even use it directly for
manipulating math statements.)

"Standard" math notation just isn't that useful outside of math
publications.

-andy
From: Michael Fleming
Subject: Re: modifying array access syntax
Date: 
Message-ID: <2rlRe.934$sF6.580@newssvr24.news.prodigy.net>
······@earthlink.net wrote:
> Michael Fleming wrote:
> 
> 
>>But is there any disadvantage to be able to enter a square root sign
>>with the expression to take the square root of written under the bar
>>directly into the code. Except for the general awkwardness of the input
>>devices we have that make it cumbersome to do so, I can't think of any.
> 
> 
> No one is stopping you from writing a pretty printer that does the
> appropriate transform on (sqrt {expression}) and the rest of the
> arithmetics.  The input system isn't all that hard either.  Unless your
> keyboard has a sqrt key, you'll have to do something odd, but folks
> learn emacs commands and use drop downs, so that's no big deal IF
> there's a benefit, which there isn't.

I guess that answers my question about why no one is doing this: No one 
cares!
> 
>>Your code would look and work exactly the same except for some
>>attractively type set math expressions.
> 
> 
> And the benefit of a different presentation for 1% of my code is?
> 
> I'd be surprised if there are any non-trivial applications where more
> than half of the code is math.  I suspect that the most math intensive
> applications do very little math but make heavy use of GUI libraries.
> This shouldn't be all that surprising - even math publications contain
> a lot of non-math.
>
I wouldn't say that most math intensive applications do very little 
math, though. That doesn't sound right to me at all. There are a lot of 
math intensive applications that have a very minimal UI.

> 
>>A math expression is looked at whole, and demonstrates all sorts of
>>relationship which have to be read out of the code.
> 
> 
> A lisp expression is looked at as a whole and demonstrates
> relationships between all lisp expressions, not just math ones.
> 
> A different format for math will make it harder to see relationships
> that might exist between the 1% that is math and the other 99% of the
> code.
> 
> "Standard" math notation is hieroglyphics for a very small subset of
> things that one might want to talk/code about.  It isn't even all that
> good within its domain.  (You can't even use it directly for
> manipulating math statements.)
> 
> "Standard" math notation just isn't that useful outside of math
> publications.

Well, I do have a much bigger idea than just math notation I'm toying 
with. But a demo project is the only way to show what I'm talking about.
I do want to bring back hieroglyphics, by the way.

It has been strange being on this side of the Lisp notation vs. 
(so-called) infix notation dispute. It's like a taste of my own 
medicine. Tastes good!

Thanks.

Michael

> -andy
> 
From: ······@earthlink.net
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125584434.697202.282560@z14g2000cwz.googlegroups.com>
Michael Fleming wrote:
> ······@earthlink.net wrote:
> > I'd be surprised if there are any non-trivial applications where more
> > than half of the code is math.  I suspect that the most math intensive
> > applications do very little math but make heavy use of GUI libraries.
> > This shouldn't be all that surprising - even math publications contain
> > a lot of non-math.
> >
> I wouldn't say that most math intensive applications do very little
> math, though. That doesn't sound right to me at all. There are a lot of
> math intensive applications that have a very minimal UI.

I was unclear.  I'm using "math intensive" to refer to applications
where the fraction of code that is math is relatively large, whether or
not their total amount of math code is absolutely large.

I'm asserting that applications that do a lot of math, especially
sophisticated stuff, require lots of "not math" support code, so if you
want to have a high percentage of math code, you're restricted to
applications whose math doesn't require much support code and then you
need to design the app so the other code is small too.  Small-code GUI
wrappers around simple math are my guess for the apps that will rank
highest on this scale.

> I do want to bring back heiroglyphics, by the way.

Hieroglyphics don't scale, and require more work if you've got much to
say.
From: Pascal Bourguignon
Subject: Re: modifying array access syntax
Date: 
Message-ID: <87fysozvbc.fsf@thalassa.informatimago.com>
······@earthlink.net writes:

> Michael Fleming wrote:
>> ······@earthlink.net wrote:
>> > I'd be surprised if there are any non-trivial applications where more
>> > than half of the code is math.  I suspect that the most math intensive
>> > applications do very little math but make heavy use of GUI libraries.
>> > This shouldn't be all that surprising - even math publications contain
>> > a lot of non-math.
>> >
>> I wouldn't say that most math intensive applications do very little
>> math, though. That doesn't sound right to me at all. There are a lot of
>> math intensive applications that have a very minimal UI.
>
> I was unclear.  I'm using "math intensive" to refer to applications
> where the fraction of code that is math is relatively large, whether or
> not their total amount of math code is absolutely large.

Oh!  You mean like this:

(defun theorem (s)
  (th1 nil nil (cadr s) (caddr s)))


(defun th1 (a1 a2 a c)
  (cond ((null a) (th2 a1 a2 nil nil c))
        (t (or (member (car a) c)
               (cond ((atom (car a))
                      (th1 (cond ((member (car a) a1) a1)
                                 (t (cons (car a) a1))) a2 (cdr a) c))
                     (t (th1 a1 (cond ((member (car a) a2) a2)
                                      (t (cons (car a) a2))) (cdr a) c)))))))

;; ...


> I'm asserting that applications that do a lot of math, especially
> sophisticated stuff, require lots of "not math" support code, so if you
> want to have a high percentage of math code, you're restricted to
> applications whose math doesn't require much support code and then you
> need to design the app so the other code is small too.  Small-code gui
> wrappers around simple math is my guess for the apps that will rank
> highly on this scale.
>
>> I do want to bring back heiroglyphics, by the way.
>
> Heiroglyphics don't scale and require more work if you've got much to
> say.

But they look nice.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Nobody can fix the economy.  Nobody can be trusted with their finger
on the button.  Nobody's perfect.  VOTE FOR NOBODY.
From: Morten Alver
Subject: Re: modifying array access syntax
Date: 
Message-ID: <df16q6$825$1@orkan.itea.ntnu.no>
Michael Fleming wrote:
> Why are we still using teletypes at all? Why can't we enter the equation 
> into our code directly as the mathematician writes it? This would not be 
> hard, really. A new keyboard for the left hand with math symbols on it, 
> a graphics pad for positioning, and smart symbols that realize what 
> surrounds them. So if you move the pointer to the top right of an 
> integral sign, the program realizes you are entering the upper limit.

You may be aware of Fortress, which has "Mathematical notation" as one 
of its focus areas:

http://research.sun.com/projects/plrg/

(I don't know anything about this project or its status, just thought 
I'd mention it)


--
Morten
From: Pascal Costanza
Subject: Re: modifying array access syntax
Date: 
Message-ID: <3nbnj6Fphf6U1@individual.net>
············@gmail.com wrote:

> Let me say a few words about my design thoughts here, and maybe people
> can help me see a better direction.
[...]

> One issue is always how to access an element in an array of which the
> dimensionality is general.  For example, I may have an expression to
> average neighboring elements in a computational "i" direction.  Here's
> the fortran code for 1d, 2d, 3d:
> 
> 1D: 0.5*(a(i) +a(i+1))
> 2D: 0.5*(a(i,j) +a(i+1,j))
> 3D: 0.5*(a(i,j,k) +a(i+1,j,k))
> 
> What I need is a notation which says "shift the i index by one and
> default the rest to unshifted" which then expresses all three
> possibilities above.  CL's apply seems the way to approach this.
> 
> But big expressions with "apply" is really quite far from how you'd
> like to express yourself in a notation that is close to the domain
> here.  What I had in mind is something that resembles standard notation
> for the field in question, which is a subscripted notation.
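The apply-based approach described in the quoted text can be sketched as
follows.  The helper name and argument convention here are invented for
illustration; they are not from the original post:

```lisp
;; Hypothetical helper: average A at INDICES with its neighbor one
;; step along AXIS, for arrays of any rank.
(defun average-along (a axis indices)
  (let ((shifted (loop for idx in indices
                       for n from 0
                       collect (if (= n axis) (+ idx 1) idx))))
    (* 0.5 (+ (apply #'aref a indices)
              (apply #'aref a shifted)))))

;; 3D case: (average-along a 0 (list i j k))
;; corresponds to the Fortran 0.5*(a(i,j,k) + a(i+1,j,k)).
```

The same call works for 1D, 2D, and 3D arrays, which is exactly the
dimension-generality asked for, though as noted the apply-heavy style is
far from subscript notation.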

There is a difference between having your concepts close to the problem 
domain and having your syntax close to the problem domain. In my 
opinion, and I can safely assume that most Lispers agree here, the 
concepts are much more important than the notation.

The "standard" notation has been developed to optimize usage on paper. 
That's why there are lots of special symbols and lots of subscripts and 
superscripts because it's much more efficient to write these things down 
with pencil and paper. On a computer, this doesn't buy you much. The 
important point here is that the notation is not in any way "natural", 
it's just optimized towards a specific kind of use, and people have just 
learned to accommodate themselves to that notation. People are very 
flexible in that regard, so they also shouldn't have a problem adapting 
to different kinds of notations. Recall that for beginners in 
mathematics, the notations typically used in mathematics are far from 
obvious.

So I think the basic tradeoff is as follows: If your interest is to 
develop the code for your own experiments, it's probably more effective 
to invest some time into learning the Lisp syntax in order to forget 
about syntactic issues altogether in the long run. If your job is to 
provide an input format that is acceptable by some clients of yours, 
consider either training them, or else writing a separate parser that 
translates a concrete syntax to Lisp s-expressions, and then continue 
from there. The Lisp reader macros are good for doing minor tweaks with 
your syntax, but if you want your syntax to look very different from 
s-expressions then this is probably not the right tool.
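As an example of the "minor tweak" scale that reader macros handle well,
here is the well-known sketch that makes [a i j k] read as
(aref a i j k).  This is a standard idiom, not code from this thread:

```lisp
;; Read [a i j k] as (aref a i j k).  A sketch; a production version
;; would install this in a named readtable rather than mutating
;; *readtable* globally.
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    (cons 'aref (read-delimited-list #\] stream t))))

;; Make #\] terminate the list, like #\) does:
(set-macro-character #\] (get-macro-character #\)))
```

After these two forms, [a i j k] at the REPL evaluates exactly like
(aref a i j k) -- but note this is still s-expression-shaped, which is
the point: anything much further from s-expressions wants a real parser.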

> Now you're probably screaming in horror at the infix notation in the
> subscript.

No. This idea is brought up regularly by newbies, and apparently this is 
a "tradition" that goes back to the very beginnings of Lisp. John 
McCarthy himself suggested an alternative syntax for Lisp, called 
M-expressions, which is much closer to mathematical notation than 
s-expressions are. It's also a "tradition" that regular users of Lisp 
don't care, because such alternative syntaxes don't solve a real problem.

There is even a section on this issue in the paper about "The Evolution 
of Lisp" by Richard Gabriel and Guy Steele. See 
http://www.dreamsongs.com/Essays.html

> Well, tough.  That's how people write these equations on
> paper and that's how they should be able to write them in code.  If
> this means translating behind the scenes into prefixed lisp, so be it.
> 
> This is just scratching the surface, but there's a flavor.

There is an infix package somewhere on the net that you can download and 
use that probably already provides what you want.


Pascal

-- 
OOPSLA'05 tutorial on generic functions & the CLOS Metaobject Protocol
++++ see http://p-cos.net/oopsla05-tutorial.html for more details ++++
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125189345.158458.213120@f14g2000cwb.googlegroups.com>
Pascal Costanza wrote:
> There is a difference between having your concepts close to the problem
> domain and having your syntax close to the problem domain. In my
> opinion, and I can safely assume that most Lispers agree here, the
> concepts are much more important than the notation.
>
> The "standard" notation has been developed to optimize usage on paper.
> That's why there are lots of special symbols and lots of subscripts and
> superscripts because it's much more efficient to write these things down
> with pencil and paper. On a computer, this doesn't buy you much. The
> important point here is that the notation is not in any way "natural",
> it's just optimized towards a specific kind of use, and people have just
> learned to accomodate themselves to that notation. People are very
> flexible in that regard, so they also shouldn't have a problem to adapt
> to different kinds of notations. Recall that for beginners in
> mathematics, the notations typically used in mathematics are by far not
> obvious.

I don't agree.  The paper notation is optimized essentially without
limitation for concise expressiveness.  Mathematical notation on paper
is not arbitrary.  It became and stayed what it is for good reason.
The computer imposes limitations that paper and pencil do not, but I
think the closer you can get to the pencil and paper version the
better, within the framework the computer imposes on you (lines of ASCII
characters).

> The Lisp reader macros are good for doing minor tweaks with
> your syntax, but if you want your syntax to look very different from
> s-expressions then this is probably not the right tool.

Well, no, but nothing is the right tool.  It's a matter of which is the
least bad.

This is one of many issues of developing a domain specific language on
top of lisp.  I'm convinced that it should be at least de-prioritized
until I have certain other things figured out first.

Thanks,
Bob
From: Pascal Bourguignon
Subject: Re: modifying array access syntax
Date: 
Message-ID: <877je6ev87.fsf@thalassa.informatimago.com>
············@gmail.com writes:
> I don't agree.  The paper notation is optimized essentially without
> limitation for concise expressiveness.  Mathematical notation on paper
> is not arbitrary.  It became and stayed what it is for good reason.
> The computer imposes limitations that paper and pencil do not, but I
> think the closer you can get to the pencil and paper version the
> better, within the framework the computer imposes on you (lines of ASCI
> characters).

I note two things about paper mathematical notations:

- each "author" has his own.  There are some conventional notations
  often reused, but it's actually quite flexible.

- I see a tendency to actually remove notation.  For example, you
  start with a vector written as:

       ->                                                      ->
       AB    then it becomes more abstract and is written as:  v

and finally the most sophisticated mathematicians just write v or x.

The same with addition, you start with: a + b + c,

                                 n
then: x0 + x1 + ... + xn, then:  Σ xi  and finally they just write: xi
                                i=0

eventually removing any syntactical idiosyncrasies.

To me, it looks very much like lisp where the syntax is deleted and
you just write the abstract tree.



-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
In deep sleep hear sound,
Cat vomit hairball somewhere.
Will find in morning.
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125194163.116494.133960@g43g2000cwa.googlegroups.com>
Pascal Bourguignon wrote:
> ············@gmail.com writes:
> > I don't agree.  The paper notation is optimized essentially without
> > limitation for concise expressiveness.  Mathematical notation on paper
> > is not arbitrary.  It became and stayed what it is for good reason.
> > The computer imposes limitations that paper and pencil do not, but I
> > think the closer you can get to the pencil and paper version the
> > better, within the framework the computer imposes on you (lines of ASCI
> > characters).
>
> I note two things about paper mathematical notations:
>
> - each "author" has his own.  There are some convetional notations
>   often reused, but it's actually quite flexible.
>
> - I see a tendency to actually remove notation.  For example, you
>   start with vector noted as:
>
>        ->                                                     ->
>        AB    then it becomes more abstract and are noted as:  v
>
> and finally the most sophisticated mathematicians just write v or x.
>
> The same with addition, you start with: a + b + c,
>
>                               n
> then: x0 + x1 + ... Xn, then: S xi  and finally they just write: xi
>                              i=0
>
> eventually removing any syntactical idiosyncrasie.
>
> To me, it looks very much like lisp where the syntax is deleted and
> you just write the abstract tree.

Hmm.  You just arrived at my notation (less multi-dimensionality).
Coincidence?

(Actually, as has been pointed out, x_i is not a sum.  It is a vector.
But anyway...)

You had me until "very much like lisp".  Something like the Einstein
notation is a great example of a standard way of expressing a great
deal of information with a small amount of precise syntax.  That's what
I want to add on top of lisp for a domain specific language.
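For concreteness, the kind of contraction that Einstein notation
abbreviates as "a_ij b_j" (summation over the repeated index j), spelled
out in plain Common Lisp.  This is my illustration of the target
semantics, not Bob's DSL:

```lisp
;; c_i = sum over j of a_ij * b_j -- the matrix-vector product that
;; Einstein notation writes as just "a_ij b_j".
(defun contract (a b)
  (let* ((n (array-dimension a 0))
         (m (array-dimension a 1))
         (c (make-array n :initial-element 0)))
    (dotimes (i n c)                 ; returns C after the loop
      (dotimes (j m)
        (incf (aref c i) (* (aref a i j) (aref b j)))))))
```

A subscript-aware DSL would generate exactly this sort of loop nest from
the repeated-index convention.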

Thanks,
Bob
From: Nathan Baum
Subject: Re: modifying array access syntax
Date: 
Message-ID: <derb99$h3n$1@news6.svr.pol.co.uk>
············@gmail.com wrote:
> Pascal Bourguignon wrote:
> 
>>············@gmail.com writes:
>>
>>>I don't agree.  The paper notation is optimized essentially without
>>>limitation for concise expressiveness.  Mathematical notation on paper
>>>is not arbitrary.  It became and stayed what it is for good reason.
>>>The computer imposes limitations that paper and pencil do not, but I
>>>think the closer you can get to the pencil and paper version the
>>>better, within the framework the computer imposes on you (lines of ASCI
>>>characters).
>>
>>I note two things about paper mathematical notations:
>>
>>- each "author" has his own.  There are some convetional notations
>>  often reused, but it's actually quite flexible.
>>
>>- I see a tendency to actually remove notation.  For example, you
>>  start with vector noted as:
>>
>>       ->                                                     ->
>>       AB    then it becomes more abstract and are noted as:  v
>>
>>and finally the most sophisticated mathematicians just write v or x.
>>
>>The same with addition, you start with: a + b + c,
>>
>>                              n
>>then: x0 + x1 + ... Xn, then: S xi  and finally they just write: xi
>>                             i=0
>>
>>eventually removing any syntactical idiosyncrasie.
>>
>>To me, it looks very much like lisp where the syntax is deleted and
>>you just write the abstract tree.
> 
> Hmm.  You just arrived at my notation (less multi-dimensionality).
> Coincidence?
> 
> (Actually, as has been pointed out, x_i is not a sum.  It is a vector.
> But anyway...)
> 
> You had me until "very much like lisp".  Something like the Einstein
> notation is a great example of a standard way of expressing a great
> deal of information with a small amount of precise syntax.  That's what
> I want to add on top of lisp for a domain specific language.

Well, no. Einstein Summation is a great example of an ad hoc way of 
expressing a great deal of information with a small amount of precise 
syntax and a not inconsiderable amount of contextual knowledge.

Take your original example:

   a_i,j,k

That would translate to

   a
    i j k

in normal notation, but does that mean

   a
    i j k

or
     /        \
   E | a      |
   i \  i j k /

?

Is

           2
   g(x) = x

an equality or a function?

Presumably this wouldn't be a problem in programming, but is

    *
   a

the adjoint of a, or an indication that there's a footnote or endnote 
about 'a'?
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125201937.994674.225460@g47g2000cwa.googlegroups.com>
Nathan Baum wrote:

> Well, no. Einstein Summation is a great example of an ad hoc way of
> expressing a great deal of information with a small amount of precise
> syntax and a not inconsiderable amount of contextual knowledge.

Your point being what?  I am writing a domain specific language.  The
whole point of a domain specific language is to make such assumptions
about context, because there are advantages to be gained by it.  The
Einstein notation is a good example of that.

Thanks,
Bob
From: Nathan Baum
Subject: Re: modifying array access syntax
Date: 
Message-ID: <deridq$6ds$1@newsg2.svr.pol.co.uk>
············@gmail.com wrote:
> Nathan Baum wrote:
> 
> 
>>Well, no. Einstein Summation is a great example of an ad hoc way of
>>expressing a great deal of information with a small amount of precise
>>syntax and a not inconsiderable amount of contextual knowledge.
> 
> 
> Your point being what?

My point being:

* That it is not true that "The paper notation is optimized essentially 
without limitation for concise expressiveness," because the increase in 
conciseness within an equation is opposed by a decrease in conciseness 
where the equation is explained. Eventually, the decrease in the 
explanation's conciseness will be greater than the increase in the 
equation's conciseness, at which point you will have reached the limit 
of paper notation's limitless optimisation.

* That it is not true that "Mathematical notation on paper is not 
arbitrary," because, for complex math, one needs to come up with 
arbitrary short forms of longer expressions in order to fit an equation 
on one page.

* That it is not true that "It became and stayed what it is for good 
reason," because it did not and will not "stay what it is". Any given 
notation will always be insufficient for some new equation somebody 
wants to write down.

> I am writing a domain specific language.  The whole point of a domain
> specific language is to make such assumptions about context, because
> there are advantages to be gained by it.  The Einstein notation is a
> good example of that.

Sure. But it's also a good example of the fact that paper notation 
possesses neither 'unlimited conciseness', standardisation, nor 
immutability, all of which you claimed it had. You may have made the 
claim in error, but it was made nonetheless and it was that to which I 
was responding.
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125213604.320505.195260@g49g2000cwa.googlegroups.com>
Nathan Baum wrote:
> ············@gmail.com wrote:
> > Nathan Baum wrote:
> >
> >
> >>Well, no. Einstein Summation is a great example of an ad hoc way of
> >>expressing a great deal of information with a small amount of precise
> >>syntax and a not inconsiderable amount of contextual knowledge.
> >
> >
> > Your point being what?
>
> My point being:
>
> * That it is not true that "The paper notation is optimized essentially
> without limitation for concise expressiveness,"

Of course it is true.  If I can imagine it, I can write it.  There is no
limitation like the ASCII character set, or writing on one line, or
making things easy for a computer to parse.

> Eventually, the decrease in the
> explanation's conciseness will be greater than the increase in the
> equation's conciseness, at which point you will have reached the limit
> of paper notation's limitless optimisation.

What the heck are you talking about?  A notation has conventions.  It's
true that these conventions hold information that can be used to
"expand" the notation.  So what?  That's the point of well-designed
conventions, i.e., notation.

> * That is is not true that "Mathematical notation on paper is not
> arbitrary,"

Of course it is true.

> because, for complex math, one needs to come up with
> arbitrary short forms of longer expressions in order to fit an equation
> on one page.

Sure.  So what?  How does that have the slightest bearing on the idea
that the choice of notation is not arbitrary?  Are you saying that
Einstein's notation is arbitrary?  Of course it isn't.  It's the right
choice for certain kinds of expression.  It is optimized for that
application, and its form is not arbitrary because of that application.
So I can use a subexpression and choose a letter for it.  That doesn't
make the notation itself arbitrary.  Geez.

> * That it is not true that "It became and stayed what it is for good
> reason," because it did not and will not "stay what it is". Any given
> notation will always be insufficient for some new equation somebody
> wants to write down.

Go take a textbook on fluid dynamics off the shelf from the late 19th
century.  Then go take one off the shelf from this decade.  You will
find that many notations have persisted for over 100 years.  This
notation became and stayed what it is for good reason.  Of course that
is true.

> > I am writing a domain specific language.  The whole point of a domain
> > specific language is to make such assumptions about context, because
> > there are advantages to be gained by it.  The Einstein notation is a
> > good example of that.
>
> Sure. But it's also a good example of the fact that paper notation
> possesses neither 'unlimited conciseness',

I never claimed anything as ridiculous as that.  You invented it.

> standardisation,

Well that's simply false.  Ask any fluid dynamicist in the entire
country what the symbol "rho" stands for.  They will instantly tell you
it means density.  That's a standard notation.

Einstein's notation is another example.  It is standard and can be
found in thousands and thousands of texts without explanation beyond a
possible "we are using Einstein's notation."
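To make that concrete, here is a minimal Common Lisp sketch of
mechanizing the convention: a repeated index in a product becomes an
explicit summation loop.  (The macro name EINSUM and the explicit
index/range pair are invented for this illustration; a real DSL would
infer the repeated index itself.)

```lisp
;; EINSUM is a hypothetical name, used only for illustration.
;; Sum EXPR over INDEX running from 0 below RANGE.
(defmacro einsum ((index range) expr)
  `(loop for ,index below ,range
         sum ,expr))

;; The paper notation  a_i x^i  then becomes:
;;   (einsum (i n) (* (aref a i) (aref x i)))
;; which expands into an ordinary LOOP summing the products.
```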

> immutability

Again, take some textbooks from the shelf from 100 years ago and you
will find notations still in use today.  The good notations survive
because they are useful.  They are not arbitrary, but carefully
designed and optimized for their application over time.

> , all of which you claimed it had. You may have made the
> claim in error,

Baloney.  You're just a nit arguing for argument's sake.  When I say
something is "not arbitrary", I don't mean that there aren't any
degrees of freedom left in the system.  That's just a silly contrary
way to read what I wrote.

Thanks,
Bob
From: Pascal Bourguignon
Subject: Re: modifying array access syntax
Date: 
Message-ID: <87psrydel1.fsf@thalassa.informatimago.com>
···@zedat.fu-berlin.de (Stefan Ram) writes:

> Pascal Bourguignon <····@mouse-potato.com> writes:
>>                              n
>>then: x0 + x1 + ... Xn, then: Σ xi  and finally they just write: xi
>>                             i=0
>
>   I have never seen just  xi, but I remember Einstein's sum convention
>   to sum when the same upper and lower index appears, such as in aᵢxⁱ.
>
>   (Actually, I can not see what I just wrote, due to the lack of an
>   appropriate Unicode font, so I hope that I have entered the correct
>   codes.)

It's easy to expand Einstein's notation.  Actually, I used (xi) when
it appeared alone, since on its own it would indeed be ambiguous.

-- 
"Klingon function calls do not have "parameters" -- they have
"arguments" and they ALWAYS WIN THEM."
From: Christopher C. Stacy
Subject: Re: modifying array access syntax
Date: 
Message-ID: <u4q9abo5c.fsf@news.dtpq.com>
············@gmail.com writes:

> Pascal Costanza wrote:
> > The "standard" notation has been developed to optimize usage on paper.
> 
> I don't agree.  The paper notation is optimized essentially without
> limitation for concise expressiveness.  Mathematical notation on paper
> is not arbitrary.  It became and stayed what it is for good reason.
> The computer imposes limitations that paper and pencil do not, but I
> think the closer you can get to the pencil and paper version the
> better, within the framework the computer imposes on you (lines of ASCII
> characters).

Sometimes it's better to take nothing,
rather than to take just part of something.
From: Robert Maas, see http://tinyurl.com/uh3t
Subject: Limitations of computer math notation? (was: modifying array access syntax)
Date: 
Message-ID: <REM-2005sep16-003@Yahoo.Com>
> From: ············@gmail.com
> The paper notation is optimized essentially without limitation for
> concise expressiveness.  Mathematical notation on paper is not
> arbitrary.  It became and stayed what it is for good reason.

But note that in mathematics nothing is dependent on time. If you prove
something is true today, it'll still be true tomorrow. The notation is
optimized for expressing such static for-all-time truths and
implications. This is different from computer programming where you may
assign a value to a variable today but a millisecond later you may
assign a new value to that same variable, and any truth about that
variable you discovered or proved during that brief interval might no
longer be valid. There's some commonality between math concepts and
computer-programming concepts, but some major differences too. What
notation is optimum for one domain isn't necessarily optimum for the
other.

> The computer imposes limitations that paper and pencil do not,

No, it doesn't. Some current computer-implemented methods of storage
and presentation of data *do* impose limitations, such as ASCII which
has only certain characters, and linear strings of Unicode which have
lots of characters but only limited (linear) relationships between
those characters. But WYSIWYG document editors, together with their
underlying data structures containing Unicode together with various
kinds of links and tags, have ways of getting around some of these
limitations. There's no reason a super-good math-WYSIWYG editor
couldn't fully implement all professionally-publishable math, the same
as paper and pencil, leaving only ill-defined new notations for
paper-and-pencil-only. And there could be an escape mode in such an
editor whereby anyone can free-hand such ill-defined new notations
also. A human could visually process both well-defined structures and
ill-defined hand-crafted diagrams in such a document, while
proof-checking software could process only the well-defined structures
and leave the ill-defined stuff as "need human intervention here".

You might look at MRPP3 (a.k.a. "POX") as a start for the visual aspect
of such an editor, of nested "overlays" on a bitmapped device, whereby
low-level expressions given as overlays are arranged in various
standard geometrical patterns (vertical stack, horizontal flow,
diagonal superscript or subscript, bracket or fraction bar or
square-root notation etc. just the right size to fit around inner
object), and Gosper's POX macros and/or MRPP4 as a start for relating
the Lisp math notation (such as used internally in Utah/Hearn's
"REDUCE" program and in MIT's MacSyma program) to such nested overlays.
Perhaps you can design an underlying data representation that is better
than the one MRPP3 used but basically the same idea, perhaps using Knuth's
"TeX" kerning ideas to make output slightly prettier, although that
might not be necessary in the first version. Then design a nice UI that
works equally well as a GUI window application and as a HTML-accessible
server-side application, and proceed to implement it. Once you have a
nice way of interactively editing documents containing mathematical
structures, whereby the same document can be edited in either GUI mode
or HTML mode, what you see is the rendering in GUI or HTML but what you
are actually doing is editing a *math* document, you can then add
elements of programming, whereby you now have not just a math document
but a math program, containing complex math data as literal constants
throughout the program, and when you run it your main output as well as
your debug output is also in pretty math (on GUI) or semi-pretty math
(via HTML), and you can at any time enter test data by directly editing
in (semi-)pretty-math mode. No need to say (setq y '(sin (/ 1 x))),
instead you can create that internal code by manually editing something
that looks like this as you edit it in USASCII/HTML mode:
              1
  y  <-  sin ---
              x
and when viewing the same internal statement in GUI mode the <- is an
actual left-assignment-arrow and all the fonts are prettier and
appropriate and the fraction bar isn't a bunch of hyphens but is a
single nice bar.
From: Christophe Rhodes
Subject: Re: modifying array access syntax
Date: 
Message-ID: <sqslwvjjs0.fsf@cam.ac.uk>
Pascal Costanza <··@p-cos.net> writes:

> There is a difference between having your concepts close to the problem 
> domain and having your syntax close to the problem domain. In my 
> opinion, and I can safely assume that most Lispers agree here, the 
> concepts are much more important than the notation.

Let me put in a slight vote for the other side: when implementing
numerical or related code in Lisp, I do often attempt to approach the
notation of papers.  One example might be the Hidden Markov Model code
I posted a reference to here recently[1], where I use greek characters
and have a fairly close correspondence between the notation in
Rabiner's tutorial and the notation in the code.  This greatly
enhances maintainability: my own (subsequent) notes following the
notation in Rabiner can easily be expressed as code.  So don't dismiss
the importance of similarity of notation too glibly, though I agree
that a newcomer to the language shouldn't attempt to carry baggage
over.

Christophe

[1] <http://www-jcsu.jesus.cam.ac.uk/~csr21/hmm.lisp> in case anyone
missed it.
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125189599.687396.235660@f14g2000cwb.googlegroups.com>
Christophe Rhodes wrote:
> Let me put in a slight vote for the other side: when implementing
> numerical or related code in Lisp, I do often attempt to approach the
> notation of papers.  One example might be the Hidden Markov Model code
> I posted a reference to here recently[1], where I use greek characters
> and have a fairly close correspondence between the notation in
> Rabiner's tutorial and the notation in the code.  This greatly
> enhances maintainability: my own (subsequent) notes following the
> notation in Rabiner can easily be expressed as code.  So don't dismiss
> the importance of similarity of notation too glibly, though I agree
> that a newcomer to the language shouldn't attempt to carry baggage
> over.
>
> Christophe
>
> [1] <http://www-jcsu.jesus.cam.ac.uk/~csr21/hmm.lisp> in case anyone
> missed it.

Christophe,

Quite interesting.  How do you generate these greek characters in an
editor like emacs?  This is another area where the traditional computer
programming equivalent is far worse than paper and pencil notation.
Writing out greek characters in an equation destroys their
expressiveness, IMO.

Thanks,
Bob
From: Pascal Bourguignon
Subject: Re: modifying array access syntax
Date: 
Message-ID: <87u0hadepg.fsf@thalassa.informatimago.com>
············@gmail.com writes:
>> [1] <http://www-jcsu.jesus.cam.ac.uk/~csr21/hmm.lisp> in case anyone
>> missed it.
>
> Quite interesting.  How do you generate these greek characters in an
> editor like emacs? 

M-: (set-input-method 'greek) RET
M-x toggle-input-method RET

But I prefer to type SIGMA and let emacs display Σ automatically, to
keep my sources in ASCII, therefore compatible with any
implementation.



(require 'cl)  ; for INCF and MAPCAN below

(defconst +letter-regexp-format+ "\\<\\(%s\\)\\>"
  "Regexp template for a letter name; group 1 must be the name itself.")

(defconst *greek-letters*  
  '( "alpha" "beta" "gamma" "delta" "epsilon" "zeta" "eta"
     "theta" "iota" "kappa" "lambda" "mu" "nu" "xi" "omicron" "pi" 
     "rho"  "terminalsigma" "sigma" "tau"
     "upsilon" "phi" "chi" "psi" "omega" )
  "The order of these strings is fixed by the encoding of greek-iso8859-7!")


(defun greek-letter-font-lock ()
  "
RETURN: A font-lock-keywords list mapping greek letter names 
        to greek characters.
"
  (when (<= 21 emacs-major-version)
    (let ((maj 64) (min 96))
      (mapcan 
       (lambda (letter) 
         (incf maj) (incf min)
         `(
           (,(format +letter-regexp-format+ (upcase letter))
            (1 (progn (compose-region (match-beginning 1) (match-end 1)
                                      ,(make-char 'greek-iso8859-7 maj)
                                      'decompose-region)
                      nil)))
           (,(format +letter-regexp-format+ (downcase letter))
            (1 (progn (compose-region (match-beginning 1) (match-end 1)
                                      ,(make-char 'greek-iso8859-7 min)
                                      'decompose-region)
                      nil)))))
       *greek-letters*))))


(defun tree-upcase-strings (tree)
  (cond
   ((stringp tree) (upcase tree))       ; elisp spelling of STRING-UPCASE
   ((consp tree) (cons (tree-upcase-strings (car tree))
                       (tree-upcase-strings (cdr tree))))
   (t tree)))


(defvar pretty-greek t)
(defvar *greek-flk* '())

(defun pretty-greek ()
  "
Show LAMBDA keyword as a greek letter lambda in lisp source code.
 (add-hook 'emacs-lisp-mode-hook 'pretty-greek)
 (add-hook 'lisp-mode-hook       'pretty-greek)
"
  (interactive)
  (unless (and (boundp 'pretty-greek) (not pretty-greek))
    (setf font-lock-keywords-case-fold-search nil)
    (setf *greek-flk*
          (sort (append (greek-letter-font-lock)
                        ;; APL-LETTER-FONT-LOCK may not be defined here.
                        (when (fboundp 'apl-letter-font-lock)
                          (apl-letter-font-lock)))
                (lambda (a b) (> (length (car a)) (length (car b))))))
    (font-lock-add-keywords nil *greek-flk*)))


(defun cancel-pretty-greek ()
  (interactive)
  (font-lock-remove-keywords nil *greek-flk*))



> This is another area where the traditional computer
> programming equivalent is far worse than paper and pencil notation.
> Writing out greek characters in an equation destroys their
> expressiveness, IMO.

There's a reason why one-letter identifiers are frowned upon in
programs.  Greek letters included!

Note that mathematicians start with a long list of definitions:

let α be the angle between vector U and vector V
let ρ be the module of the vector U.
                       n
let's use the notation Σ  x   to mean:  x  + x   + ... x
                      i=p  i             p    p+1        n

let's use the notation '...' to mean ...


Some notations are usual, like '...', some are often used, like Σ, but
most of them are specific to each theorem or each formula.  Actually,
mathematicians have to write the same formulas over and over, with
slight modifications, to express their theorems.  We programmers just
write the formulas once.  It would not be more readable to write:

(defun acceleration (mass distance)
  (let* ((g +gravitation-constant+)
         (M1 *planet-mass*)
         (M2 mass)
         (d distance)
         (a (/ (* g m1 m2) d d)))
    a))

than:

(defun acceleration (mass distance)
   (/ (* +gravitation-constant+ *planet-mass* mass) distance distance))    

But of course, if you want to prove a theorem, you'll have to rewrite
the formula a lot and it'll be worthwhile to write:



let G be the gravitation constant
let M be the mass of the planet
let m be the mass of the point
let d be the distance between the point and the center of mass of the planet.

                   M . m
            a = G --------
                     d²

                   M . m
    <=>    d² = G --------
                     a

                     ----------
                    /    M . m
    <=>    d  = \  / G --------
                 \/        a

rather than copy the longer names over and over.


Different problem, different solution!


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Until real software engineering is developed, the next best practice
is to develop with a dynamic system that has extreme late binding in
all aspects. The first system to really do this in an important way
is Lisp. -- Alan Kay
From: ············@gmail.com
Subject: Re: modifying array access syntax
Date: 
Message-ID: <1125195017.943536.214240@g47g2000cwa.googlegroups.com>
Pascal Bourguignon wrote:

[snipped code]

Thanks for that.

> > This is another area where the traditional computer
> > programming equivalent is far worse than paper and pencil notation.
> > Writing out greek characters in an equation destroys their
> > expressiveness, IMO.
>
> There's a reason why one-letter identifiers are frowned upon in
> programs.  Greek letters included!

The only cogent one I'm aware of is "you can't search/grep for them,"
an argument which hardly holds for greek letters, unless you do
something really bizarre like start spelling things in greek.

> Note that mathematician start with a long list of definition:
>
> let α be the angle between vector U and vector V
> let ρ be the module of the vector U.
>                        n
> let's use the notation Σ  x   to mean:  x  + x   + ... x
>                       i=p  i             p    p+1        n
>
> let's use the notation '...' to mean ...

Sure, and that's what comments are for in a program.

> Some notations are usual, like '...', some are often used, like Σ, but
> most of them are specific to each theorem or each formula.

Not in my field.  There are at least hundreds (if not thousands) of
symbols that have fairly well-defined conventions on what they mean.
And even if many of them are overloaded, it is almost always clear
from context.

> Actually,
> mathematicians have to write the same formulas over and over, with
> slight modifications, to express their theorems.

And in the codes I write I have hundreds or thousands of equations,
which use the same symbols over and over again.

> write the formulas once.  It would not be more readable to write:
>
> (defun acceleration (mass distance)
>   (let* ((g +gravitation-constant+)
>          (M1 *planet-mass*)
>          (M2 mass)
>          (d distance)
>          (a (/ (* g m1 m2) d d)))
>     a))
>
> than:
>
> (defun acceleration (mass distance)
>    (/ (* +gravitation-constant+ *planet-mass* mass) distance distance))

No, but that's a toy example.  If I had 50 different interaction force
models, I would certainly prefer to express them all using M1 and M2
rather than planet-mass and mass.  A mathematician has an eye for
equation form in traditional notation that gets lost when you start
spelling everything out longhand with long, descriptive identifiers.
Long, descriptive identifiers are good in some contexts.  In
mathematical equations, they are terrible.

>
> But of course, if you want to proove a theorem, you'll have to rewrite
> the formula a lot and it'll be worthwhile to write:

Same reason it makes sense in the codes I write.

> Different problem, different solution!

Well, your "problem" may be different, but it isn't the real problem
(or at least, it isn't my problem.)

Thanks,
Bob
From: Christophe Rhodes
Subject: Re: modifying array access syntax
Date: 
Message-ID: <sq7je6jv9y.fsf@cam.ac.uk>
············@gmail.com writes:

> Quite interesting.  How do you generate these greek characters in an
> editor like emacs?  This is another area where the traditional computer
> programming equivalent is far worse than paper and pencil notation.
> Writing out greek characters in an equation destroys their
> expressiveness, IMO.

In emacs, I set the input mode to "TeX" (C-u C-\ TeX RET if muscle
memory doesn't fail me :-), which is a familiar input method to me
from writing the maths in the first place :-)

Christophe
From: Pascal Costanza
Subject: Re: modifying array access syntax
Date: 
Message-ID: <3ndl3bF118hoU1@individual.net>
Christophe Rhodes wrote:

> Let me put in a slight vote for the other side: when implementing
> numerical or related code in Lisp, I do often attempt to approach the
> notation of papers.  One example might be the Hidden Markov Model code
> I posted a reference to here recently[1], where I use greek characters
> and have a fairly close correspondence between the notation in
> Rabiner's tutorial and the notation in the code.  This greatly
> enhances maintainability: my own (subsequent) notes following the
> notation in Rabiner can easily be expressed as code.  So don't dismiss
> the importance of similarity of notation too glibly, though I agree
> that a newcomer to the language shouldn't attempt to carry baggage
> over.

...but using names that are close to the problem domain is a somewhat 
different issue than rearranging the syntax grammar, right?


Pascal

-- 
OOPSLA'05 tutorial on generic functions & the CLOS Metaobject Protocol
++++ see http://p-cos.net/oopsla05-tutorial.html for more details ++++
From: Christophe Rhodes
Subject: Re: modifying array access syntax
Date: 
Message-ID: <sq3boujnow.fsf@cam.ac.uk>
Pascal Costanza <··@p-cos.net> writes:

> ...but using names that are close to the problem domain is a
> somewhat different issue than rearranging the syntax grammar, right?

I think it's a difference of degree, and that it's reasonable for
certain applications in certain contexts to want to write (and perhaps
more importantly _read_) in a notation which more closely resembles
traditional mathematics than traditional Lisp.

Christophe
From: Wade Humeniuk
Subject: Re: modifying array access syntax
Date: 
Message-ID: <I23Qe.197866$9A2.143450@edtnps89>
············@gmail.com wrote:
> I do a lot of multi-dimensional array number crunching.  Setting aside
> possible performance drawbacks for a moment, Common Lisp appears to
> have things I've desired for a long time, in particular the macro
> capability, and multi-methods secondarily.  Lisp is advertised as "the
> extensible language" and I hope so, because the array access notation
> is IMO unacceptable in the native syntax.
> 

My honest advice is to just give it up and use the Lisp syntax.  If
you do not you will not become fluent in Lisp.  (Actually what will
happen is that you will fight for a while and then give in anyway).
The human brain is quite capable of holding incompatible ways
of doing things; you just have to be exposed enough.  It's just
like learning a second spoken language.  If you keep translating
from your original tongue you will not be a fluent speaker.

Wade
From: Kent M Pitman
Subject: Re: modifying array access syntax
Date: 
Message-ID: <uhdd9gbzw.fsf@nhplace.com>
Wade Humeniuk <··················@telus.net> writes:

> ············@gmail.com wrote:
> > I do a lot of multi-dimensional array number crunching.  Setting aside
> > possible performance drawbacks for a moment, Common Lisp appears to
> > have things I've desired for a long time, in particular the macro
> > capability, and multi-methods secondarily.  Lisp is advertised as "the
> > extensible language" and I hope so, because the array access notation
> > is IMO unacceptable in the native syntax.
> >
> 
> My honest advice is to just give it up and use the Lisp syntax.  If
> you do not you will not become fluent in Lisp.

I just wrote a longer post which amounted to wanting to make this
point.  Sometimes I wish I had others' gift of conciseness.  Well, I
hope there's value in both the long and short presentations, and that
I haven't wasted my time, but I did want to underscore that this was
my essential message.

> (Actually what will
> happen is that you will fight for a while and then give in anyway).
> The human brain is quite capable of holding incompatible ways
> of doing things; you just have to be exposed enough.  It's just
> like learning a second spoken language.  If you keep translating
> from your original tongue you will not be a fluent speaker.

Another good point.

There's some lingering meta-issue which relates to the question of
whether Lisp provides you the syntactic tools it does to allow you to
perfect what it does or to overcome what it does and rise above it.  I
guess every tool is double-edged, and Lisp tries hard not to cripple
you in making an intelligent choice between joining and fighting it.
It may make it tricky to write a[i,j] or a(i,j), but not out of a
desire to--rather out of a desire to make sure that its other
choices are efficiently implemented.  IMO, even if it falls short of
making these notations easy, it falls less short of making them easy
than the languages that provide a[i,j] or a(i,j) or a[i][j] fall short
of making (aref a i j) easy.  So, by that "tolerance" metric, I
personally think it does quite ok.
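For what it's worth, the standard reader will bend surprisingly far
here.  A sketch (my own, deliberately stealing #\[ and #\] for the
purpose, and certainly not a recommendation):

```lisp
;; Make [a i j] read as (AREF A I J) with standard reader-macro
;; machinery only.  This claims the bracket characters globally.
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    (cons 'aref (read-delimited-list #\] stream t))))

;; Treat #\] as a terminating character, like #\).
(set-macro-character #\] (get-macro-character #\)))

;; After this, [a 0 1] reads as (AREF A 0 1).
```

A serious version would install this in its own readtable (via
COPY-READTABLE and rebinding *READTABLE*) rather than clobbering the
default one.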