From: Ulrich Hobelmann
Subject: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39a9ueF5spo2dU1@individual.net>
(Sorry for crossposting this.  If you specifically refer to one language 
family only, please post the answer only to one group.)

There are several things I (and others) don't really like about Lisp and 
Forth, but most of us live with them.  Today I set out to try a mix of 
both to see how certain things improve.

IMHO the worst thing in Lisp is its many parentheses.  While they do a 
great job in structuring the language and allowing features such as 
variable numbers of arguments and macros, they turn every program into a 
ragged tree (while a Forth function typically is one or two lines).  I 
don't say this is terrible, but I thought I might try to improve on it.

The worst thing about Forth IMHO is the stack clutter.  When every 
function has some swaps, dups, overs and nips in it, it takes 
concentration to keep track of the stack in your mind.  Sure, there are 
stack comments, but when you write those, you might just as well use 
local variables instead.

The good thing about Lisp is its structure: there are no confusing 
compile and interpretation modes nor words like postpone, immediate, [, 
or ].

The good thing about Forth is simplicity: functions can be very short. 
If you want to chain several functions you just concatenate them in 
Forth: bla baz bar foo; in Lisp this looks like (foo (bar (baz bla))), 
which is reverse order.  And while I could write a macro like (with bla 
baz bar foo) in Lisp, I just wanted to try things out here.
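
For illustration, here's a minimal sketch of such a macro in Common Lisp 
(the name "with" is just the one used above, nothing standard):

(defmacro with (value &rest functions)
  ;; Expand (with x f g h) into (h (g (f x))).
  (reduce (lambda (form fn) (list fn form)) functions
          :initial-value value))

;; (with bla baz bar foo)  expands to  (foo (bar (baz bla)))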

Please note that this is just the product of a couple of hours of toying 
around, nothing more. Don't take it too seriously.

First here's how some code could look in a simple Lisp (not Common 
Lisp).  The ret is the explicit return continuation. (I smell a ret!)
Macros are implicitly quasiquoted, somewhat like in Scheme.  That means 
that most of their definition is just inserted in place of the macro 
invocation.

The , is Lisp syntax to say that that part is not inserted verbatim, but 
instead the *value* of the unquoted variable is inserted.  ,@ does the 
same, but if it's unquoting a list, the elements of the list (not the 
list itself) are inserted.
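
For comparison, standard Common Lisp backquote does exactly this:

(let ((x 3) (xs '(1 2)))
  `(a ,x ,@xs b))       ; => (A 3 1 2 B)

,x inserts the value of x; ,@xs splices the elements of the list xs into 
the surrounding list.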

(macro (when cond . body)
   (if ,cond ,body ()))

(macro (loop (i a b) . body)
   (let (,i ,a)
     (when (< ,i ,b)
       (label start)
       ,@body
       (set ,i (1+ ,i))
       (when (< ,i ,b) (goto start)))))

(var foo 5)
(fun (fac n ret)
   (if (<= n 1)
       (ret 1)
       (ret (* (fac (- n 1))
	      n))))

; alen = length; a! = store; a@ = fetch
(fun (map fn array ret)
   (let (len (alen array) ar (newarray len))
     (loop (i 0 len)
       (a! ar i (fn (a@ array i))))
     (ret ar)))

; aprint = print array
(fun (test ret)
   (aprint (map (fn (n ret) (ret (+ n 2))) (array '(1 2 3 4)))))

My first attempt to forthify this was the following.  Like Forth code it 
is sequential.  Words that end in : are prefix and take a fixed number 
of arguments (something like Forth definition words).  To group code, () 
it.  ()s in code that is not used by some prefix word are like a Forth 
local declaration; they define variables by eating values from the 
stack.  Note that I wanted to get away from stack manipulation; thus, 
there are no dup, over etc.  The stack is just for implicit parameter 
passing.  I know this makes code longer in general, but I hope it helps 
readability.

In the macros all code is compiled verbatim; :variables are substituted 
with their value at macro invocation time (like , in Lisp), and 
::variables are "spliced in" (like ,@ in Lisp).

macro: when (body) (if: :body ())

macro: loop (i body)
   ((:i b) :i b <
    when: (label: start ::body :i 1+ (:i) :i b < when: start goto))

5 var: foo
fun: fac ((n ret) n 1 <= when: (1 ret) n 1 - fac n * ret)

fun: map ((fn array ret) array alen (len) len newarray (ar)
   0 len loop: i (array i a@ fn call ar i a!) ar ret)

# note the implicit argument in the anonymous function
# it expects one number to be on the stack for addition
fun: test ((ret) fn: (ret) (2 + ret) array: (1 2 3 4) map aprint)

This looks somewhat cluttered, I think.  There are fewer ()s than in Lisp, 
but :s everywhere.  Also, syntax like "fun: fac (..." looks almost as 
ugly as C.

So my second (and last so far) attempt turns
fun: name (code)
into
(FUN name args ... IS code ...)
The explicit () and the IS delimit, so that there can be arguments 
without using ()s everywhere.  Specials are capitalized for better 
readability.  The when macro is useless, since I introduced another 
delimiter (ELSE) in the IF list, again, saving ()s.  -> is used as a 
special form to bind variables (slightly more verbose than before).

(MACRO LOOP i . body IS
   (-> :i b) :i b <
   (IF (LABEL start) ::body :i 1+ (-> :i) :i b < (IF start goto)))

5 (VAR foo)
(FUN fac var n ret IS n 1 <= (IF 1 ret ELSE n 1 - fac n * ret))

(FUN map fn array ret IS array alen (-> len) len newarray (-> ar)
   0 len (LOOP i array i a@ fn call ar i a!) ar ret)

(FUN test ret IS (FN ret IS 2 + ret) (ARRAY 1 2 3 4) map aprint)

I think having everything INSIDE the ()s makes it more readable.  The IF 
ELSE also looks more Forth-like.

I plan for PostLisp to be more array than list-oriented (except for 
parsing and macros etc.), as the map function suggests, so maybe it 
should be called POAP (POstfix Array Processing).  Don't mix this up 
with the *simple* SOAP language (Simple Object Access Protocol) that 
looks like this:
<soap:Body xmlns:m="http://www.stock.org/stock">
  <m:GetStockPrice>
     <m:StockName>IBM</m:StockName>
  </m:GetStockPrice>
</soap:Body>

with a result of:
<soap:Body xmlns:m="http://www.stock.org/stock">
   <m:GetStockPriceResponse>
     <m:Price>34.5</m:Price>
   </m:GetStockPriceResponse>
</soap:Body>

Isn't it cute?  Translating the above example functions into SOAP is 
left as an exercise to the reader ;)

From: Pascal Bourguignon
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <87mztbokb5.fsf@thalassa.informatimago.com>
Ulrich Hobelmann <···········@web.de> writes:
> [...]
> I plan for PostLisp to be more array than list-oriented (except for
> parsing and macros etc.), as the map function suggests, so maybe it
> should be called POAP (POstfix Array Processing).
> [...]

You should try LOGO.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Wanna go outside.
Oh, no! Help! I got outside!
Let me back inside!
From: Stefan Scholl
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <1hxn9waj9a048.dlg@parsec.no-spoon.de>
On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:

> (Sorry for crossposting this.  If you specifically refer to one language 
> family only, please post the answer only to one group.)
> 
> There are several things I (and others) don't really like about Lisp and 
> Forth, but most live with it.  Today I set out to try a mix of both to 
> see, how certain things improve.
> 
> IMHO the worst thing in Lisp is its many parentheses.

OK. I don't have to read any further.
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39bb3rF5svbs4U1@individual.net>
Stefan Scholl wrote:
> On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:
>>IMHO the worst thing in Lisp is its many parentheses.
> 
> 
> OK. I don't have to read any further.

That's why I labelled it OT and "language experiment".  I don't expect 
working Lisp or Forth programmers to gain anything from this.

If you hate any experimental language that tries different things, then 
indeed it's good that you stopped reading.

The post was not for you.
From: Christopher C. Stacy
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <u1xanv3jn.fsf@news.dtpq.com>
Ulrich Hobelmann <···········@web.de> writes:

> Stefan Scholl wrote:
> > On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:
> >>IMHO the worst thing in Lisp is its many parentheses.
> > OK. I don't have to read any further.
> 
> That's why I labelled it OT and "language experiment".  I don't expect
> working Lisp or Forth programmers to gain anything from this.
> 
> If you hate any experimental language that tries different things,
> then indeed it's good that you stopped reading.
> 
> The post was not for you.

Then why post it in comp.lang.lisp?

(Why do people post "OT" things at all?)
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39cmjmF60q6ooU1@individual.net>
Christopher C. Stacy wrote:
>>If you hate any experimental language that tries different things,
>>then indeed it's good that you stopped reading.
>>
>>The post was not for you.
> 
> 
> Then why post it in comp.lang.lisp?

Because this is not c.l.CL, but c.l.l and I tried a mix of Lisp and Forth?

> (Why do people post "OT" things at all?)

  * Because there is no more specific newsgroup
  * Because there are people who don't just use Lisp as a tool, but who 
study lots of different languages for the heck of it (or to find the 
best language for some job, whatever is good for them...).  To them such 
a post is interesting (at least I got one reply per email so far)
From: Fred Gilham
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <u7ll8uy7jt.fsf@snapdragon.csl.sri.com>
Ulrich Hobelmann <···········@web.de> writes:

> Christopher C. Stacy wrote:
> >>If you hate any experimental language that tries different things,
> >>then indeed it's good that you stopped reading.
> >>
> >>The post was not for you.
> > Then why post it in comp.lang.lisp?
> 
> Because this is not c.l.CL, but c.l.l and I tried a mix of Lisp and Forth?
> 
> > (Why do people post "OT" things at all?)
> 
>   * Because there is no more specific newsgroup
>   * Because there are people who don't just use Lisp as a tool, but
>     who study lots of different languages for the heck of it (or to
>     find the best language for some job, whatever is good for
>     them...).  To them such a post is interesting (at least I got one
>     reply per email so far)


Actually I think your intuition that Lisp and Forth have a lot in
common is quite accurate.  The philosophy is very close (I would use
the term "transparency" to describe it).  But they are two extreme
ways to implement that philosophy.  Both Lisp and Forth try to give
you access to as much as possible.  But Lisp does it in order to be a
vehicle for powerful abstractions.  Forth tries to be a minimalist
programming environment.  Each is, perhaps, a local maximum in the
space of programming languages.

-- 
Fred Gilham                                        ······@csl.sri.com
So: make a smash hit film about a guy who allows himself to be killed
to redeem the world, don't get nominated. Make a modestly successful
film about a gal who asks to be killed because she can't bear to live
as an invalid, win Best Picture. --- Noah Millman on the Oscar Awards
From: Stefan Scholl
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <9qcx684r9r90.dlg@parsec.no-spoon.de>
On 2005-03-10 17:31:55, Ulrich Hobelmann wrote:
> Stefan Scholl wrote:
>> On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:
>>>IMHO the worst thing in Lisp is its many parentheses.
>> 
>> OK. I don't have to read any further.
> 
> That's why I labelled it OT and "language experiment".  I don't expect 
> working Lisp or Forth programmers to gain anything from this.
> 
> If you hate any experimental language that tries different things, then 
> indeed it's good that you stopped reading.

No, you posted to a Lisp newsgroup without any real knowledge of
Lisp. Speaking of parentheses as the worst thing is flamebait.
From: Svein Ove Aas
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d0pmhc$48d$1@services.kq.no>
Stefan Scholl wrote:

> On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:
> 
>> (Sorry for crossposting this.  If you specifically refer to one language
>> family only, please post the answer only to one group.)
>> 
>> There are several things I (and others) don't really like about Lisp and
>> Forth, but most live with it.  Today I set out to try a mix of both to
>> see, how certain things improve.
>> 
>> IMHO the worst thing in Lisp is its many parentheses.
> 
> OK. I don't have to read any further.
>
Somewhat along the lines of "Parentheses are a bad thing, except for every
other option", perhaps?
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39bbaiF60mgh7U1@individual.net>
Svein Ove Aas wrote:
> Stefan Scholl wrote:
> Somewhat along the lines of "Parantheses are a bad thing, except for every
> other option", perhaps?

Did you guys even read the post, and notice that I mentioned that there 
are numerous advantages to parentheses as well?
Did you read the OT and the "experiment" in the title?

I was expecting people to explain to me what a shoddy thing I wrote 
down, but I wasn't expecting them to complain about my (IMHO very mild) 
questioning of Lisp's and Forth's disadvantages.

S-Exps work for some people, RPN without any nesting for others; I tried 
to synthesize those two.
From: Christopher C. Stacy
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <uwtsftoqq.fsf@news.dtpq.com>
Ulrich Hobelmann <···········@web.de> writes:

> Svein Ove Aas wrote:
> > Stefan Scholl wrote:
> > Somewhat along the lines of "Parantheses are a bad thing, except for every
> > other option", perhaps?
> 
> Did you guys even read the post and that I mentioned that there are
> numerous advantages to parentheses, also?
> Did you read the OT and the "experiment" in the title?
> 
> I was expecting people to explain to me what a shoddy thing I wrote
> down, but I wasn't expecting them to complain about the IMHO very
> quiet questioning of Lisp or Forth disadvantages.
> 
> S-Exps work for some people, RPN without any nesting for others; I
> tried to synthesize those two.

Here in the Lisp newsgroup, we have come to the conclusion that
parenthesis are superior to Forth RPN.  This goes a long way
towards explaining why we program in Lisp, not in Forth.
Why would you expect that we would be very interested in your
Forth-like language, when we already rejected Forth?

The obvious conclusion is that you are either trolling here, 
or else you don't really understand why we like the parenthesis.

If you're unclear on the technicalities: the point of the parenthesis
is to provide syntax.  Contrary to some popular beliefs, Lisp is not
a language without syntax; the parenthesis are not merely punctuation
to delineate operators.
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39cljrF61r0hpU1@individual.net>
Christopher C. Stacy wrote:
> Here in the Lisp newsgroup, we have come to the conclusion that
> parenthesis are superior to Forth RPN.  This goes a long way
> towards explaining why we program in Lisp, not in Forth.
> Why would you expect that we would be very interested in your
> Forth-like language, when we already rejected Forth?

I wasn't appealing to the working CL programmer; that's why I chose the 
title "[OT] PostLisp, a language experiment".  There are people around 
interested in Lisp, Forth and other languages (and there are Joy and 
Factor, which are Lisp- and Forth-like); that's why I posted.

> The obvious conclusion is that you are either trolling here, 
> or else you don't really understand why we like the parenthesis.

Neither.  I really like Common Lisp, and intend to become fluent in it, 
since it might be a good tool for lots of purposes (I have more of a 
Scheme background so far).  I don't even see the ()s when programming. 
I just wanted to try something else, because (a) nested function calls 
don't look too nice like this: (foo (bar (baz bla))), that is, RPN 
follows the control flow more closely, and (b) Forth/RPN programs are 
usually shorter and cover fewer lines, which makes them more readable in 
some cases (assuming they are well documented).

> If you're unclear on the technicalities: the point of the parenthesis
> is to provide syntax.  Contrary to some popular beliefs, Lisp is not
> a language without syntax; the parenthesis are not merely punctuation
> to delineate operators.

I know, but that doesn't mean that there aren't other ways to structure 
programs, and I wanted to try a mix between Pre- and Postfix (which are 
equivalent in a way anyway).
From: Gareth McCaughan
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <87mztayrbk.fsf@g.mccaughan.ntlworld.com>
Ulrich Hobelmann wrote:

> I know, but that doesn't mean that there aren't other ways to
> structure programs, and I wanted to try a mix between Pre- and Postfix
> (which are equivalent in a way anyway).

How about something exactly in between prefix and postfix?
Write symbols directly on top of one another. Oh, hang on,
APL already did that. :-)

-- 
Gareth McCaughan
.sig under construc
From: Christopher C. Stacy
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <ull8t9a6t.fsf@news.dtpq.com>
Gareth McCaughan <················@pobox.com> writes:

> Ulrich Hobelmann wrote:
> 
> > I know, but that doesn't mean that there aren't other ways to
> > structure programs, and I wanted to try a mix between Pre- and Postfix
> > (which are equivalent in a way anyway).
> 
> How about something exactly in between prefix and postfix?
> Write symbols directly on top of one another. Oh, hang on,
> APL already did that. :-)

(For anyone who didn't get the joke:)

In APL, operators have no precedence, are either monadic or dyadic
(argument on the right, or argument on both left and right side)
and you simply read them right-to-left.  Easier to parse than infix,
which would be impossible anyway since there are so many operators.
(And all functions have the same syntax, except they're names instead
of special characters like +-*/.)   The constituent characters for
operators are not ASCII, and some of them are composed on the keyboard
by "over striking" (eg: + <backspace> /  makes a + with a slash in it.)

Common Lisp took (among other things) the REDUCE function directly from APL.
In APL, that's a slash (/).  (reduce #'+ '(1 2 3)) is  +/ 1 2 3.

APL was a great language -- my favorite, before I learned Lisp.
Its unusual appearance was one big reason that APL was not as
popular as it could have been.  Besides, it's an interpreted,
slow language, just like Lisp.   There are many other similarities.

Here in the 21st century we can have all the wacky characters we want,
without having to buy special terminals (eg. special typewriter balls).
Even though the keyboards don't come with special characters on the
keytops, I wonder if languages that use something other than ASCII
can become popular.  For example, the natural math notation that was
suggested by someone here.
From: Pascal Bourguignon
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <877jkfmmpg.fsf@thalassa.informatimago.com>
Ulrich Hobelmann <···········@web.de> writes:
> I was expecting people to explain to me what a shoddy thing I wrote
> down, but I wasn't expecting them to complain about the IMHO very
> quiet questioning of Lisp or Forth disadvantages.

Well, I mentioned LOGO. In essence, LOGO is LISP - parentheses.
 
> S-Exps work for some people, RPN without any nesting for others; I
> tried to synthesize those two.

Prefix and suffix are strictly equivalent. 

parentheses or no is an orthogonal matter. 
The question is whether you have variadic operators or not.

Without variadic operators, you don't need parentheses: you know for
each operator how many arguments you need.

With variadic operators, you can use parentheses (systematically) or a
terminator or "marker" for those operators that are variadic (or all
operators).

(defun fact (x) (if (<= x 1) 1 (* x (fact (1- x)))))


Without variadic, postfix, you get RPN: 
You need to write:

    1 2 3 4 + + +                       suffix "rpn"/"forth"
    1 2 + 3 + 4 +

You could write as well:

    + + car cons 1 2 3 4                prefix parenthesis-less lisp

meaning: (+ (+ (car (cons 1 2)) 3) 4)   note the strictly dyadic +



With variadic:

    with parenthesis you can write:

           (+ 1 2 3 4)                  prefix "lisp"

    with a marker:

           + 1 2 3 4 ;                  prefix parenthesis-less lisp

    or:    mark 1 2 3 4 var-add         suffix "postscript"


So, you don't need much to remove parentheses from lisp. Only identify
all variadic operations (including &rest, &optional, &key, etc
arguments), and change their signatures to take a fixed number of
argument. You can always put variable number of arguments in a single
list argument which is built with a delimiting marker "endlist",
"mark", ";", whatever.

    replace string "hello" list :start1 pos :end 4 endlist

    defun fact list x endlist if <= x 1 1 * x fact 1- x


Now seriously, is this more readable than:

    (replace string "hello"  :start1 pos :end 4)

    (defun fact (x) (if (<= x 1) 1 (* x (fact (1- x)))))

?

You would need to write it more vertically:

    // prefix:

    defun fact
        list x endlist
        if <= x 1
           1
           * x 
             fact 1- x


    // suffix:

    fact
        mark
           x
        list
        <= x 1
           1
             x 
               x 
               1-
             fact
           *
        if
     defun

and with a lot of spaces...



Now, really if you want a prefix parenthesis-less lisp-like language,
have a look at logo:

http://www.99-bottles-of-beer.net/l.html#Logo   

Logo (http://www.inasec.ca/com/Logo/mainpb.htm) is a 
simple language, suitable for teaching and famed for its 
"turtle" graphics.  More info: http://http.cs.berkeley.edu/~bh/

; by Augusto Chioccariello
to 99bottles
for [i 99 1 -1] [
   print se :i [bottle(s) of beer on the wall]
   print se :i [bottle(s) of beer]
   print [take one down, pass it around]
   print se (:i - 1) [bottle(s) of beer on the wall]]
end

 
-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Until real software engineering is developed, the next best practice
is to develop with a dynamic system that has extreme late binding in
all aspects. The first system to really do this in an important way
is Lisp. -- Alan Kay
From: Trent Buck
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <20050311082946.13c5feb9@harpo.marx>
Spake Pascal Bourguignon:
>     replace string "hello" list :start1 pos :end 4 endlist
> 
>     defun fact list x endlist if <= x 1 1 * x fact 1- x
> 
> You would need to write it more vertically:

I wonder how hard it would be to train an auto-indenter to cope with
paren-less expressions.

-- 
Trent Buck, Student Errant
As I was watching the Sun set over the cliffs...
What model?  And how big a trebuchet did they use? -- TimC, AdB
From: Pascal Bourguignon
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <87mztbksc0.fsf@thalassa.informatimago.com>
Trent Buck <·········@tznvy.pbz> writes:

> Spake Pascal Bourguignon:
> >     replace string "hello" list :start1 pos :end 4 endlist
> > 
> >     defun fact list x endlist if <= x 1 1 * x fact 1- x
> > 
> > You would need to write it more vertically:
> 
> I wonder how hard it would be to train an auto-indenter to cope with
> paren-less expressions.

Not really hard: it would just have to know about the number of
parameters for each operator.  Easy for COMMON-LISP operators. Less
easy for user-defined ones, but feasible...
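
A minimal sketch of that idea in Common Lisp (the arity table and the 
function name here are made up for illustration): read the flat prefix 
tokens back into nested lists, after which the usual Lisp indentation 
machinery applies.

(defparameter *arity*
  '((if . 3) (<= . 2) (* . 2) (+ . 2) (1- . 1) (fact . 1))
  "Number of arguments for each operator (all fixed, no variadics).")

(defun parse-prefix (tokens)
  "Turn a flat, paren-less prefix token list into a nested form.
Returns the form and the unconsumed tokens."
  (let* ((head (pop tokens))
         (arity (cdr (assoc head *arity*))))
    (if (null arity)                    ; a literal or a variable
        (values head tokens)
        (let ((args '()))
          (dotimes (n arity)
            (multiple-value-bind (arg rest) (parse-prefix tokens)
              (push arg args)
              (setf tokens rest)))
          (values (cons head (nreverse args)) tokens)))))

;; (parse-prefix '(if <= x 1 1 * x fact 1- x))
;; => (IF (<= X 1) 1 (* X (FACT (1- X)))), NIL

Variadic operators and macro arguments are exactly where this simple 
scheme breaks down.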

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
I need a new toy.
Tail of black dog keeps good time.
Pounce! Good dog! Good dog!
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39cme1F5v6knmU1@individual.net>
Pascal Bourguignon wrote:
> Trent Buck <·········@tznvy.pbz> writes:
> 
> 
>>Spake Pascal Bourguignon:
>>
>>>    replace string "hello" list :start1 pos :end 4 endlist
>>>
>>>    defun fact list x endlist if <= x 1 1 * x fact 1- x
>>>
>>>You would need to write it more vertically:

Yay, the ugliness of Python :D

>>I wonder how hard it would be to train an auto-indenter to cope with
>>paren-less expressions.
> 
> 
> Not really hard: it would just have to know about the number of
> parameter for each operator.  Easy for COMMON-LISP operators. Less
> easy for user defined ones but feasible...
> 

There's a language called REBOL, I believe.  It's something like Lisp, 
but doesn't have ()s (i.e. it uses fixed numbers of arguments).  I don't 
know about automatic indentation, though.
From: Gorbag
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <%onYd.9$Du.6@bos-service2.ext.ray.com>
"Pascal Bourguignon" <····@mouse-potato.com> wrote in message
···················@thalassa.informatimago.com...
> Trent Buck <·········@tznvy.pbz> writes:
>
> > Spake Pascal Bourguignon:
> > >     replace string "hello" list :start1 pos :end 4 endlist
> > >
> > >     defun fact list x endlist if <= x 1 1 * x fact 1- x
> > >
> > > You would need to write it more vertically:
> >
> > I wonder how hard it would be to train an auto-indenter to cope with
> > paren-less expressions.
>
> Not really hard: it would just have to know about the number of
> parameter for each operator.  Easy for COMMON-LISP operators. Less
> easy for user defined ones but feasible...

&rest and macros would make life difficult since you won't know which is a
subexpression and which is an uninterpreted argument.

- + 2 3 4 5 7 9

has a number of possible parses (and these are built-in operators).
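
For instance, since + and - are both variadic in Common Lisp, that token 
stream admits (among others):

    (- (+ 2 3) 4 5 7 9)
    (- (+ 2 3 4) 5 7 9)
    (- (+ 2 3 4 5 7) 9)
    (- (+ 2 3 4 5 7 9))

and there is no way to tell where the + expression ends without some 
delimiter.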
From: Christopher C. Stacy
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <u3bv31rh4.fsf@news.dtpq.com>
Svein Ove Aas <·········@aas.no> writes:

> Stefan Scholl wrote:
> 
> > On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:
> > 
> >> (Sorry for crossposting this.  If you specifically refer to one language
> >> family only, please post the answer only to one group.)
> >> 
> >> There are several things I (and others) don't really like about Lisp and
> >> Forth, but most live with it.  Today I set out to try a mix of both to
> >> see, how certain things improve.
> >> 
> >> IMHO the worst thing in Lisp is its many parentheses.
> > 
> > OK. I don't have to read any further.
> >
> Somewhat along the lines of "Parantheses are a bad thing, 
> except for every other option", perhaps?

Lisp programmers deserve the language they get.
From: Fred Gilham
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <u78y4vo6nm.fsf@snapdragon.csl.sri.com>
> > Somewhat along the lines of "Parantheses are a bad thing, 
> > except for every other option", perhaps?
> 
> Lisp programmers deserve the language they get.

No, we are not worthy . . . .

-- 
Fred Gilham                                        ······@csl.sri.com
   just make me lighter
   make me lighter still
     'til the yellow of the sun takes me

   [oh what Lazarus saw! I cannnot bear this anymore!]
                                                      -- Linshuang Lu
From: Bernd Paysan
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <b776g2-j82.ln1@miriam.mikron.de>
Svein Ove Aas wrote:
>>> IMHO the worst thing in Lisp is its many parentheses.
>> 
>> OK. I don't have to read any further.
>>
> Somewhat along the lines of "Parantheses are a bad thing, except for every
> other option", perhaps?

This has been crossposted to comp.lang.forth. Forth is RPN, therefore the
syntax is completely without parentheses. You can make Lisp-like languages
with RPN, like RPL (the HP calculator Reverse Polish Lisp), or PostScript
(the printer language). They are Lisp-like in so far that they have a
automatic type concept, garbage collection, and so on. I've seen people
implementing other, less well-known reverse polish Lisps (e.g. LIFO and
teatime from Alex Burger).

Here in clf, we see trolls who say "The worst thing about Forth is RPN". I
can see that "the worst thing in Lisp is its many parentheses" is probably
just as offensive ;-). From my point of view, RPN is a valid, and quite
good, alternative to parentheses. It's a matter of taste.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4psy776yl.fsf@franz.com>
Bernd Paysan <············@gmx.de> writes:

> Svein Ove Aas wrote:
> >>> IMHO the worst thing in Lisp is its many parentheses.
> >> 
> >> OK. I don't have to read any further.
> >>
> > Somewhat along the lines of "Parantheses are a bad thing, except for every
> > other option", perhaps?
> 
> This has been crossposted to comp.lang.forth. Forth is RPN, therefore the
> syntax is completely without parentheses. You can make Lisp-like languages
> with RPN, like RPL (the HP calculator Reverse Polish Lisp), or PostScript
> (the printer language). They are Lisp-like in so far that they have a
> automatic type concept, garbage collection, and so on. I've seen people
> implementing other, less well-known reverse polish Lisps (e.g. LIFO and
> teatime from Alex Burger).
> 
> Here in clf, we see trolls who say "The worst thing about Forth is RPN". I
> can see that "the worst thing in Lisp is its many parentheses" is probably
> just as offensive ;-). From my point of view, RPN is a valid, and quite
> good option for parentheses. It's a matter of taste.

Coming from a Lisp perspective of late, but having been well-versed in
both Forth and algol-like (let's just say C) languages I'd like to
break down these generalizations to more specific issues.  I would
dearly like not to come off as a troll; having chosen the Lisp side
for my career, it may appear that way, but Forth also holds a special
place in my heart: when the August, 1978 Byte Magazine came out (the
one which showcased Forth) I was intrigued by its simplicity, and
immediately set out to write a Forth for the Computer Automation
minis my company was using for circuit board testing.   I measured
my success by whether I could play a game of Breakforth (a Breakout-
like game also provided in that issue of Byte).  I succeeded, with
minor modifications needed to simulate a bitmapped display with
the rs232 terminal I had.  I eventually gave away the Forth and the
Breakforth sources at the Computer Automation testing conference that
next year.

Two major areas where I see differences in the way things are grouped,
whether using the C style (a mixture of infix and prefix, with minimal
parentheses), the Lisp style (purely prefix, with parens always used),
or the Forth style (RPN exclusively), are expression evaluation and
procedure calling:

Expression evaluation:

C uses infix, and an implied operator precedence, with parens
optional unless they must override precedence.  This is, in my
opinion, unnatural for both humans and computers.  I give it
a zero.

Lisp uses parens everywhere, and leaves nothing to implicit rules.
This means Lots of Irritating Stupid Parens, which might be considered
a negative by people who haven't learned to make them disappear
mentally.  I give it a 1 for precision and controllability.

Forth uses RPN to achieve complete control in expressions.  Some
people don't like RPN, but I have no problem with it here.  I give
Forth a 2 - 1 for precision and controllability, and 1 for
simplicity/minimalism.


Procedure calling:

C, Lisp, and Forth invoke functions, functions, and words, respectively,
by matching arguments given at the call point with formal parameters
within the procedure definition in some way.  Each language has similar
ways of dealing with argument matching in proper procedure calls,
although they have different levels in the complexity of argument passing
styles that they support.  However, each language has vastly different
ways of handling argument _mismatches_, which is what I want to
concentrate on here:

C (which uses a prefix notation for function calls and one pair
of parens) does not generally know at runtime how many arguments are
passed.  Newer C compilers tend to view argument mismatch as a compile-time
error, but older compilers allowed compilation of wrong numbers of
arguments, with spectacular results.  For ellipses, such as are found in
printf, the varargs technique is used, a macrology that finds arguments
procedurally rather than by matching a name.

Lisp does not use very many parentheses in function calls, but it does
have a concept of a runtime argument count, so actual arguments passed
can be checked at runtime against the number required.  This makes
optional, keyword, and varargs (i.e. &rest) args trivial to implement.
I give it a 2 for simplicity and robustness in the face of incorrect
calls.

Forth [I have to plead ignorance as to what technologies have been
added since 1978, so my score may be too low] uses RPN to match arguments
to a word by pushing them onto a stack and having them be there when
the word is invoked.  It is the word's responsibility to pop the correct
number of arguments off, and the calling word's responsibility to push
the correct number of arguments, so if there is a mismatch, the user stack
will end up either growing or shrinking incorrectly.
Again, there may be new standards in Forth which allow for either
compile-time or run-time argument match checking, but back in 1978, my
greatest challenge in Forth was to know where the stack was at all times.
There were even coding worksheets, which provided a line for each word
as it was executed, and how large the stack was at each point.  I still
use this kind of charting whenever I build byte-interpreters; my
byte-interpreters usually tend to look very Forth-like.
I give Forth a 1 for simplicity, but nothing for robustness in the
face of incorrect calls.  Of course, when I was in the position of
fighting with this I had envisioned a patchable rewrite of the Forth
defining words to add argument checking during debug - perhaps someone
else has also done this, and perhaps argument mismatch has long since
been a solved problem.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: John Doty
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4230C0C6.9020105@whispertel.LoseTheH.net>
Duane Rettig wrote:

> C uses infix, and an implied operator precedence, with parens
> optional unless they must override precedence.  This is, in my
> opinion, unnatural for both humans and computers.  I give it
> a zero.

The problem with this is that infix notation with precedence is the 
result of centuries of intercultural research and negotiation on the 
best way to communicate mathematical ideas. Its strength is revealing 
the relationships among the elements of an expression in a 
human-readable way. It is, however, less good at expressing procedure.

> Forth [I have to plead ignorance as to what technologies have been
> added since 1978, so my score may be too low] uses RPN to match arguments
> to a word by pushing them onto a stack and having them be there when
> the word is invoked.  It is the word's responsibility to pop the correct
> number of arguments off, and the calling word's responsibility to push
> the correct number of arguments, so if there is  mismatch, the user stack
> will end up either growing or shrinking incorrectly if there is a
> mismatch.

But the reality is that experienced Forth programmers have very little 
trouble here. Write short definitions, document stack effect, test 
bottom up. Do that, and this sort of mistake is usually trivial to 
detect and correct. If you're having trouble with stack errors you're 
fighting the language rather than using it.

-jpd
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4hdjj6nj7.fsf@franz.com>
John Doty <···@whispertel.LoseTheH.net> writes:

> Duane Rettig wrote:
> 
> > C uses infix, and an implied operator precedence, with parens
> > optional unless they must override precedence.  This is, in my
> > opinion, unnatural for both humans and computers.  I give it
> > a zero.
> 
> The problem with this is that infix notation with precedence is the
> result of centuries of intercultural research and negotiation on the
> best way to communicate mathematical ideas. Its strength is revealing
> the relationships among the elements of an expression in a
> human-readable way. It is, however, less good at expressing procedure.

Hmm.  You want to bring Mathematics into it, eh?  OK.  Well, do you
really consider these centuries of research and negotiation to be
successful?  Before the explosion of mathematical notations that has
taken place in the 20th century, I suppose things were relatively
stable, but even before that, we had already used up all of the letters
of at least two alphabets, and had several ways of notating math
concepts.  I doubt that you would succeed in finding truly standard
notations for all math concepts.  And, like programming languages,
each has its own strengths and weaknesses.

> > Forth [I have to plead ignorance as to what technologies have been
> > added since 1978, so my score may be too low] uses RPN to match arguments
> > to a word by pushing them onto a stack and having them be there when
> > the word is invoked.  It is the word's responsibility to pop the correct
> > number of arguments off, and the calling word's responsibility to push
> > the correct number of arguments, so if there is  mismatch, the user stack
> > will end up either growing or shrinking incorrectly if there is a
> > mismatch.
> 
> But the reality is that experienced Forth programmers have very little
> trouble here. Write short definitions, document stack effect, test
> bottom up.

Right.  What if I wanted to write longer definitions?  Why is the
language not documenting its own stack?  And why should I have to put
on the straightjacket of bottom-up programming (or testing) when I
can do it top-down, or even sideways, in Lisp?

> Do that, and this sort of mistake is usually trivial to
> detect and correct. If you're having trouble with stack errors you're
> fighting the language rather than using it.

Yes, I saw this right away as an encumbrance on the use of the
language.  It didn't stop me from using it, in the way in which
the language required me to use it, but it did cause me to
continue to search for something better - a language that did
not get in the way of what I wanted to do, in the way I wanted
to do it.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: John Doty
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4232301E.3020707@whispertel.LoseTheH.net>
Duane Rettig wrote:
> John Doty <···@whispertel.LoseTheH.net> writes:
> 
> 
>>Duane Rettig wrote:
>>
>>
>>>C uses infix, and an implied operator precedence, with parens
>>>optional unless they must override precedence.  This is, in my
>>>opinion, unnatural for both humans and computers.  I give it
>>>a zero.
>>
>>The problem with this is that infix notation with precedence is the
>>result of centuries of intercultural research and negotiation on the
>>best way to communicate mathematical ideas. Its strength is revealing
>>the relationships among the elements of an expression in a
>>human-readable way. It is, however, less good at expressing procedure.
> 
> 
> Hmm.  You want to bring Mathematics into it, eh?  OK.  Well, do you
> really consider these centuries of research and negotiation to be
> successful?

Extremely. The development of modern mathematics and modern notation 
have formed a tremendous positive feedback loop for centuries. For 
example, the way it favors sum of products over products of sums 
reflects the reality of polynomial algebra (the sum of products 
representation is always possible, product of sums isn't) and no doubt 
stimulated development of areas like linear algebra. Another example is 
the way that an omitted infix operator can stand for normal 
multiplication, matrix multiplication, or the application of an operator. 
So you can do 
symbolic manipulation without committing to the nature of the symbols, 
then substitute something like (1 + d/dt) for a symbol. Very handy in 
physics.


>>>Forth [I have to plead ignorance as to what technologies have been
>>>added since 1978, so my score may be too low] uses RPN to match arguments
>>>to a word by pushing them onto a stack and having them be there when
>>>the word is invoked.  It is the word's responsibility to pop the correct
>>>number of arguments off, and the calling word's responsibility to push
>>>the correct number of arguments, so if there is  mismatch, the user stack
>>>will end up either growing or shrinking incorrectly if there is a
>>>mismatch.
>>
>>But the reality is that experienced Forth programmers have very little
>>trouble here. Write short definitions, document stack effect, test
>>bottom up.
> 
> 
> Right.  What if I wanted to write longer definitions?

Why, to make your code unreadable? Short definitions are desirable in 
any programming language. Forth has fewer barriers to short definitions 
than others, so it's simply silly to write long ones.

>  Why is the
> language not documenting its own stack?

Because what counts is what the items on the stack *represent*. That's 
the programmer's responsibility, not the language's. If you've 
documented what each item is, surely you can count them. Especially if 
your definitions are short.

>  And why should I have to put
> on the straightjacket of bottom-up programming (or testing) when I
> can do it top-down, or even sideways, in Lisp?

Well, I generally try to design top down. On the other hand, one tests 
bottom up for the same reason one builds a house bottom up: what matters 
is how each piece sits on the pieces that hold it up!

>>Do that, and this sort of mistake is usually trivial to
>>detect and correct. If you're having trouble with stack errors you're
>>fighting the language rather than using it.
> 
> 
> Yes, I saw this right away as an encumbrance on the use of the
> language.  It didn't stop me from using it, in the way in which
> the language required me to use it, but it did cause me to
> continue to search for something better - a language that did
> not get in the way of what I wanted to do, in the way I wanted
> to do it.

It seems to me that you should choose a tool for the job it does, not 
its conformance to your prejudices. For the x-ray imaging spectrometer 
project I'm working on, Guile does one part, Forth another, C another 
(and there's some VHDL, too). Depends on requirements, interfaces, 
opportunities for reuse, ease of scripting, ease of explanation to 
users, etc.

-jpd
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4psy5bn89.fsf@franz.com>
John Doty <···@whispertel.LoseTheH.net> writes:

> Duane Rettig wrote:
> > John Doty <···@whispertel.LoseTheH.net> writes:
> >
> 
> >>Duane Rettig wrote:

> >>>Forth [I have to plead ignorance as to what technologies have been
> >>>added since 1978, so my score may be too low] uses RPN to match arguments
> >>>to a word by pushing them onto a stack and having them be there when
> >>>the word is invoked.  It is the word's responsibility to pop the correct
> >>>number of arguments off, and the calling word's responsibility to push
> >>>the correct number of arguments, so if there is  mismatch, the user stack
> >>>will end up either growing or shrinking incorrectly if there is a
> >>>mismatch.
> >>
> >>But the reality is that experienced Forth programmers have very little
> >>trouble here. Write short definitions, document stack effect, test
> >>bottom up.
> > Right.  What if I wanted to write longer definitions?
> 
> 
> Why, to make your code unreadable?

I guess it depends on your definition of short.  But if your language
doesn't allow you to write a definition that is longer than normal,
without the code becoming unreadable, then that is one mark against
your language.

> Short definitions are desirable in
> any programming language. Forth has fewer barriers to short
> definitions than others, so it's simply silly to write long ones.

I doubt this.  If your definition of "barriers" is shorter names
and less syntax, then I would call it "more barriers to long
definitions" rather than "fewer barriers to short definitions".
Lisp programmers tend to choose longer, more descriptive names
over shorter, more cryptic ones.  And we have an automatic tool
within the language that provides for stylized output of code
(since it can be represented, and thus printed, as data).  So
Lisp actually provides fewer barriers to longer definitions than
other languages.

Now, of course, I'm not advocating long definitions.  A definition
is too long when it can't be mentally pictured as a module in the
programmers imagination.  But many times, the only reason for
breaking up a function is not for modularity, but because "it is
too long", by someone's arbitrary standard.  Lisp allows me to
avoid breaking up such functions in these cases, where the length
of the function in fact does not contribute to unreadability.
Of course, if you've never seen a readable long function, you'll
doubtless argue with me, but that's alright; it just proves my
point that in Lisp I can have long, readable functions.

> >  Why is the
> > language not documenting its own stack?
> 
> Because what counts is what the items on the stack *represent*. That's
> the programmer's responsibility, not the language's. If you've
> documented what each item is, surely you can count them. Especially if
> your definitions are short.

I always use my language to document what my arguments and return
values represent.  In Lisp, there is seldom any reason to document
arguments, and since lisp programs are usually functional in nature,
there are very few times, percentwise, that multiple values are
returned.

But my point is not in the documentation of these items, but in the
extremely unforgiving nature of Forth in matching up these arguments
and returns.  In Lisp, the stack is generally incremented and
decremented, at least conceptually, on a per-call basis, rather than
on a per value basis.  So there is never any need to worry about
the stack getting one-off in its execution, especially if you hot-swap
a function while running with a new function that accepts a different
number of arguments.

> >  And why should I have to put
> > on the straightjacket of bottom-up programming (or testing) when I
> > can do it top-down, or even sideways, in Lisp?
> 
> Well, I generally try to design top down. On the other hand, one tests
> bottom up for the same reason one builds a house bottom up: what
> matters is how each piece sits on the pieces that hold it up!

Not in Lisp.  One can create _and_ _test_ functionality very easily
from the top down.  It is interesting that you like to program
top-down; this suggests, if you are then forced to test bottom-up,
that your testing methodology doesn't match your design methodology.
Now, you haven't said what your implementation methodology is; I
suspect that it is also bottom-up, with unit testing close behind,
which creates a mismatch between the design phase and the
implementation/test phase.  Or, it might be that you like to
implement top-down as well, which then creates a mismatch between
the implementation and the test phase.  Either way, you are then
forced to separate your design from your testing in such a manner
that, on large enough projects, you end up forgetting the details
of your design by the time your code is ready for testing.
In Lisp, I can define higher level functions, and I can test them
right away; I don't even have to define the lower level functions that
they call (if one is called and is not yet defined, I get an error
in a prompt, to which I can respond either by defining the function
and continuing, or by supplying a value temporarily in place of
the function call).
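
A minimal sketch of that workflow (the function names are invented for 
illustration):

(defun process-order (order)
  ;; The helpers VALIDATE-ORDER, SHIP, and REJECT are not written yet;
  ;; this definition still compiles (with undefined-function warnings).
  (if (validate-order order)
      (ship order)
      (reject order)))

;; (process-order some-order) can be tried immediately: the call to the
;; undefined VALIDATE-ORDER lands in the debugger, where one can define
;; the function and retry, or supply a value in its place and continue.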

> >>Do that, and this sort of mistake is usually trivial to
> >>detect and correct. If you're having trouble with stack errors you're
> >>fighting the language rather than using it.
> > Yes, I saw this right away as an encumbrance on the use of the
> > language.  It didn't stop me from using it, in the way in which
> > the language required me to use it, but it did cause me to
> > continue to search for something better - a language that did
> > not get in the way of what I wanted to do, in the way I wanted
> > to do it.
> 
> It seems to me that you should choose a tool for the job it does, not
> its conformance to your prejudices. For the x-ray imaging spectrometer
> project I'm working on, Guile does one part, Forth another, C another
> (and there's some VHDL, too). Depends on requrements, interfaces,
> opportunities for reuse, ease of scripting, ease of explanation to
> users, etc.

Yes, I agree with you here, although you hardly know any of my
prejudices.  Here, we do indeed use Forth-like languages for
some bootstrap loaders (they are actually supplied for us on the
operating systems we use), but there are very few other areas where
Lisp is not capable of doing the job nicely.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Elizabeth D Rather
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <11351pqnac49nf6@news.supernews.com>
"Duane Rettig" <·····@franz.com> wrote in message 
··················@franz.com...
> John Doty <···@whispertel.LoseTheH.net> writes:
>
>> > ... What if I wanted to write longer definitions?
>>
>> Why, to make your code unreadable?
>
> I guess it depends on your definition of short.  But if your language
> doesn't allow you to write a definition that is longer than normal,
> without the code becoming unreadable, then that is one mark against
> your language.

It isn't so much a matter of "allowing" as simply a natural desire to make 
the programming chore simpler and the result more reliable.  "Good style" in 
Forth implies extreme modularity (many definitions only a few lines long) 
and no repeated code sequences.  This makes programs smaller (a good thing 
in embedded systems) and more reliable (since it's not only easier to test 
short, simple definitions, it avoids the problem of fixing a bug  <n> times 
because the code sequence is repeated all over the place).

>> Short definitions are desirable in
>> any programming language. Forth has fewer barriers to short
>> definitions than others, so it's simply silly to write long ones.
>
> I doubt this.  If your definitions of "barriers" is shorter names
> and less syntax, then I would call it "more barriers to long
> definitions" rather than "fewer barriers to short definitions".
> Lisp programmers tend to choose longer, more descriptive names
> over shorter, more cryptic ones.  And we have an automatic tool
> within the language that provides for stylized output of code
> (since it can be represented, and thus printed, as data).  So
> Lisp actually provides fewer barriers to longer definitions than
> other languages.

No, the "barriers" in question include the overhead of processing calling 
sequences, saving and restoring registers, etc., plus the programming 
overhead of declaring things and other procedures common at the beginning 
and end of subroutines.  Although one could say that both Forth and C pass 
parameters on a stack, the big difference is that a C compiler generates 
code to arrange the parameters where it wants them, while in Forth they're 
just there (typically left as the result of prior operations).  Although 
Forth implementations vary, in the systems I use the only "overhead" 
involved in calling a word is literally the call/return instructions.  Less 
syntax, for sure, but names have nothing to do with it.

> Now, of course, I'm not advocating long definitions.  A definition
> is too long when it can't be mentally pictured as a module in the
> programmers imagination.  But many times, the only reason for
> breaking up a function is not for modularity, but because "it is
> too long", by someone's arbitrary standard.

I certainly agree that arbitrary whacking of code into meaningless segments 
is unhelpful, but it's usually possible to identify meaningful units of 
logic, particularly if they are frequently used.  In most languages folks 
are reluctant to factor common short phrases of, say, 3-5 commands, but 
re-use them frequently as "cliches".  In Forth you'd make even such short 
things into a word and use it.  If such a fragment is given a clear name, it 
helps readability.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4r7ikzwwq.fsf@franz.com>
"Elizabeth D Rather" <··········@forth.com> writes:

> "Duane Rettig" <·····@franz.com> wrote in message
> ··················@franz.com...
> 
> > John Doty <···@whispertel.LoseTheH.net> writes:
> >
> >> > ... What if I wanted to write longer definitions?
> >>
> >> Why, to make your code unreadable?
> >
> > I guess it depends on your definition of short.  But if your language
> > doesn't allow you to write a definition that is longer than normal,
> > without the code becoming unreadable, then that is one mark against
> > your language.
> 
> It isn't so much a matter of "allowing" as simply a natural desire to
> make the programming chore simpler and the result more reliable.

A worthy goal that we all share, though we apparently disagree on
how to do it.  My point is that it need not be shorter in order to
be simpler and more reliable _and_ maintainable.

> "Good style" in Forth implies extreme modularity (many definitions
> only a few lines long) and no repeated code sequences.

Herein lies my problem with the statements coming from the Forth
side: whenever I find extremism of any kind in language concepts,
I find that that dogma tends to be overused and thus taken further
than need be, with results opposite to those intended.  Note that I
only used the word "extreme" because you did, but yours is actually
a less extreme answer than those before you, and I do appreciate
the measured response.  I actually was thinking of the word "dogma"
rather than extremism, because it fits the style of thinking I try
to stay away from.

On the Lisp side, we have our dogmatists as well, but Common Lisp
in particular does not require such dogma to get things done and
done well.  Instead, it is multiparadigm, and allows for modularity
with the occasional required rat's nest; it allows for many small
definitions or few, and it has a macro system that allows no good
phrase to be said twice (but it doesn't require it).

> This makes
> programs smaller (a good thing in embedded systems) and more reliable
> (since it's not only easier to test short, simple definitions, it
> avoids the problem of fixing a bug  <n> times because the code
> sequence is repeated all over the place).

These are all noble goals, and Common Lisp has them in common with
Forth.  I was, however, stating differences between Forth and Lisp
in my original response.

> >> Short definitions are desirable in
> >> any programming language. Forth has fewer barriers to short
> >> definitions than others, so it's simply silly to write long ones.
> >
> > I doubt this.  If your definitions of "barriers" is shorter names
> > and less syntax, then I would call it "more barriers to long
> > definitions" rather than "fewer barriers to short definitions".
> > Lisp programmers tend to choose longer, more descriptive names
> > over shorter, more cryptic ones.  And we have an automatic tool
> > within the language that provides for stylized output of code
> > (since it can be represented, and thus printed, as data).  So
> > Lisp actually provides fewer barriers to longer definitions than
> > other languages.
> 
> No, the "barriers" in question include the overhead of processing
> calling sequences, saving and restoring registers, etc., plus the
> programming overhead of declaring things and other procedures common
> at the beginning and end of subroutines.  Although one could say that
> both Forth and C pass parameters on a stack, the big difference is
> that a C compiler generates code to arrange the parameters where it
> wants them, while in Forth they're just there (typically left as the
> result of prior operations).  Although Forth implementations vary, in
> the systems I use the only "overhead" involved in calling a word is
> literally the call/return instructions.

There is at least some overhead involved in storing and retrieving a
value from the user stack.  In both Lisp and C, where the architecture
allows it, arguments don't even need to be stored into the stack; they
can be simply passed in registers.  And in Lisp, these values can also
be passed back out:

On a Mac:

CL-USER(1): (declaim (optimize speed (safety 0) (debug 0)))
T
CL-USER(2): (compile (defun foo (a b c) (values a b c)))
FOO
NIL
NIL
CL-USER(3): (disassemble 'foo)
;; disassembly of #<Function FOO>
;; formals: [none]

;; code start: #x56463dc:
   0: 3a000003     [addi]  lil r16,3
   4: 81a1000c             lwz r13,12(r1)
   8: 4e800020             blr
CL-USER(4): 

Note that there is no data movement here; the first three arguments,
in registers r3, r4, and r5, are not touched as they are passed back
as-is.  The first instruction in this example, which is setting a
values-returned count register, could probably have been elided as well,
if a policy were in effect to trust that the number of arguments coming
in (also noted by r16 in this case) was correct.  But we don't make such
a policy; instead we choose to ensure that the correct number of values
are returned, regardless of how many arguments were actually given.  In
a (safety 0) situation, where the number of arguments is incorrect, there
could be junk returned, if not enough arguments had been given, but at
least the integrity of the stack is maintained.  This same stack integrity
is also maintained when arguments are actually passed on the stack with
standard architecture calling-sequence methods.

>  Less syntax, for sure, but
> names have nothing to do with it.

Sure they do.  Whenever you associate an argument with a formal or
informal parameter, you are naming it, whether you call it b, (arg 1),
or 1 PICK.  And after all this efficiency talk, my issue is not with
the speed of association between implied arguments within words and
their calls, but with the associations themselves, and the tendency to get
them misaligned and thus throw the stack off.  When I left the Forth
world in the mid '80s, I had not gotten into the relatively new compiler
technology in Forth, nor have I kept up with it, so I did not know
if there had been any advancements in fixing this inherent problem with
the language.  And apparently, it is not something that is high on the
minds of Forthers to fix.

> > Now, of course, I'm not advocating long definitions.  A definition
> > is too long when it can't be mentally pictured as a module in the
> > programmers imagination.  But many times, the only reason for
> > breaking up a function is not for modularity, but because "it is
> > too long", by someone's arbitrary standard.
> 
> I certainly agree that arbitrary whacking of code into meaningless
> segments is unhelpful, but it's usually possible to identify
> meaningful units of logic, particularly if they are frequently used.
> In most languages folks are reluctant to factor common short phrases
> of, say, 3-5 commands, but re-use them frequently as "cliches".  In
> Forth you'd make even such short things into a word and use it.  If
> such a fragment is given a clear name, it helps readability.

Yes, of course; and in Common Lisp we have functions which can be used in
the same way, or even better, we can use macros, so that the overhead
of the expansion of that word at runtime can be eliminated at compile-time.
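
For concreteness, a minimal sketch of both forms (the names here are purely
illustrative, not from any particular system):

(defun square-of (n)
  "The cliche captured as an ordinary function; calling it costs a call."
  (* n n))

(defmacro square-of* (form)
  ;; The cliche captured as a macro: FORM is bound once and the
  ;; multiplication is spliced in at the call site at compile time.
  (let ((n (gensym)))
    `(let ((,n ,form))
       (* ,n ,n))))

;; (square-of* (+ x 1)) expands into something like
;; (LET ((#:G1 (+ X 1))) (* #:G1 #:G1)), so no call remains at runtime.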

But again, I consider this to be a similarity between the languages,
not a difference; the difference I would be concerned about is the
"3-5 commands" bit - when Mr Doty was talking about short words, I
was thinking more along the lines of a prettily indented screen full,
not 3-5 words.  Is that _really_ what you people are talking about
when you advocate short definitions?  Has Forth practice changed
that much since the days when 1 to 5 words might be defined in
a screen?

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Elizabeth D Rather
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <1136rqq73npjb44@news.supernews.com>
"Duane Rettig" <·····@franz.com> wrote in message 
··················@franz.com...
> "Elizabeth D Rather" <··········@forth.com> writes:
>
>> > I guess it depends on your definition of short.  But if your language
>> > doesn't allow you to write a definition that is longer than normal,
>> > without the code becoming unreadable, then that is one mark against
>> > your language.
>>
>> It isn't so much a matter of "allowing" as simply a natural desire to
>> make the programming chore simpler and the result more reliable.
>
> A worthy goal that we all share, though we apparently disagree on
> how to do it.  My point it that it need not be shorter in order to
> be simpler and more reliable _and_ maintainable.

A relevant thought-model might be the prose style of Hemingway vs., say, 
Nathaniel Hawthorne.  Forthers would claim that Hemingway is much easier to 
follow.

>> "Good style" in Forth implies extreme modularity (many definitions
>> only a few lines long) and no repeated code sequences.
>
> Herein lies my problem with the statements coming from the Forth
> side: whenever I find extremism of any kind in language concepts,
> I find that that dogma tends to be overused and thus taken further
> than need be, with opposite results than intended.  Note that I
> only used the word "extreme" because you did, but yours is actually
> the less extreme answer than those before you, and I do appreciate
> the measured response.  I actually was thinking of the word "dogma"
> rather than extremism, because it fits the style of thinking I try
> to stay away from.

Once again, I entirely agree that dogma tends to be overused and that's a 
bad thing.  However, every language does tend to facilitate and encourage a 
particular style, and following the style a language is optimized for is 
usually a good thing.

>>
>> ... the "barriers" in question include the overhead of processing
>> calling sequences, saving and restoring registers, etc., plus the
>> programming overhead of declaring things and other procedures common
>> at the beginning and end of subroutines.  Although one could say that
>> both Forth and C pass parameters on a stack, the big difference is
>> that a C compiler generates code to arrange the parameters where it
>> wants them, while in Forth they're just there (typically left as the
>> result of prior operations).  Although Forth implementations vary, in
>> the systems I use the only "overhead" involved in calling a word is
>> literally the call/return instructions.
>
> There is at least some overhead involved in storing and retrieving a
> value from the user stack.

As Paul Bennett correctly observes, things aren't necessarily put onto the 
stack specifically for a call, they are simply there as a result of prior 
operations, and left where they'll be most easily accessed for the next.

> In both Lisp and C, where the architecture
> allows it, arguments don't even need to be stored into the stack; they
> can be simply passed in registers.  And in Lisp, these values can also
> be passed back out:

Yes, Forth compilers will use registers sometimes, as well.  It's quite 
common for the top stack item to reside in a register, for example.  Much 
depends, of course, on the number of registers available on a particular 
platform.  A Forth implementation is usually designed to make the most 
efficient use of whatever facilities are available in a particular 
processor.

...
>> I certainly agree that arbitrary whacking of code into meaningless
>> segments is unhelpful, but it's usually possible to identify
>> meaningful units of logic, particularly if they are frequently used.
>> In most languages folks are reluctant to factor common short phrases
>> of, say, 3-5 commands, but re-use them frequently as "cliches".  In
>> Forth you'd make even such short things into a word and use it.  If
>> such a fragment is given a clear name, it helps readability.
>
> Yes, of course; and in Common Lisp we have functions which can be used in
> the same way, or even better, we can use macros, so that the overhead
> of the expansion of that word at runtime can be eliminated at 
> compile-time.

Well, there isn't any "expansion of [a] word at runtime" in Forth, just a 
call to it, since everything is compiled.  Many implementations do expand 
short primitives in place, though, since the relative cost of a call to a 
routine 3 instructions long is sort of high.

> But again, I consider this to be a similarity between the languages,
> not a difference; the difference I would be concerned about is the
> "3-5 commands" bit - when Mr Doty was talking about short words, I
> was thinking more along the lines of a prettily indented screen full,
> not 3-5 words.  Is that _really_ what you people are talking about
> when you advocate short definitions?  Has Forth practice changed
> that much since the days when 1 to 5 words might be defined in
> a screen?

Well, 3-5 words referenced in a single definition is at the low end, indeed, 
but a definition that occupies 10+ lines is relatively unusual in my 
practice.  I don't see that as a change in practice, at least in my 
experience (which goes back to 1971), but one's mileage may vary.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39h9piF63hgmoU1@individual.net>
Elizabeth D Rather wrote:
>> A worthy goal that we all share, though we apparently disagree on
>> how to do it.  My point it that it need not be shorter in order to
>> be simpler and more reliable _and_ maintainable.
> 
> 
> A relevant thought-model might be the prose style of Hemingway vs. , 
> say, Nathaniel Hawthorne.  Forthers would claim that Hemingway is much 
> easier to follow.

Having read the first two pages of The Scarlet Letter, I consider 
Hawthorne unreadable.  It smells of arrogance when someone has to 
complicate even simple sentences needlessly by inserting words that 
nobody ever uses.  (And yes, I've read *some* literature from older 
times.)  I wouldn't say everything has to look like Hemingway, though.
From: Bernd Paysan
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <91ecg2-poe.ln1@vimes.paysan.nom>
Ulrich Hobelmann wrote:
> Having read the first two pages of the scarlet letter, I consider
> Hawthorne unreadable.

I just searched for an online version, and I can only agree. It reminds me
of the "too many variables" picture in Thinking Forth. I didn't know you
could write English like that.

> It smells of arrogance when someone has to 
> complicate even simple sentences needlessly by inserting words that
> nobody ever uses.

If you compare Hawthorne's style to programming languages, Common Lisp comes
to mind. Many parentheses, many functions no other system has.

> (And yes, I've read *some* literature from older 
> times.)  I wouldn't say everything has to look like Hemingway, though.

Hemingway is a minimalist. If everyone wrote like him, it would be rather
dull.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39hhtjF61p9ibU1@individual.net>
Bernd Paysan wrote: (after having agreed that Hawthorne's style sucks)
> If you compare Hawthrone's style to programming languages, Common Lisp comes
> in mind. Many parentheses, many functions no other system has.

Yay, some more obvious trolling!  (with a cleverly set Follow-up)

What is bad about features that "no other system" has?
Special variables are used in many ways that resemble the use of Forth 
global vars (at least in the Thinking Forth examples).
Most other features are probably present in one or another programming 
language.  I won't comment on the parentheses; they are many, well...

> Hemmingway is a minimalist. If everyone wrote like him, it would be rather
> dull.

Ack.
From: Bernd Paysan
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <6gsdg2-f3a.ln1@vimes.paysan.nom>
Ulrich Hobelmann wrote:
> Yay, some more obvious trolling!  (with a cleverly set Follow-up)

;-)

> What is bad about features that "no other system" has?

Nothing special. Common Lisp simply has so many unique features that its
main proponent, Richard Gabriel, came to the conclusion that this is too
much, and wrote a paper titled "Worse is Better". Each of the unique
features in isolation is a good idea; all of them together in a single
programming language are too many.

> Special variables are used in many ways that resemble the use of Forth
> global vars (at least in the Thinking Forth examples).
> Most other features are probably present in one or another programming
> language.  I won't comment on the parentheses; they are many, well...

There are many in other, leaner Lisp dialects like Scheme, too. On the other
hand, there are functional programming languages with fewer, like OCaml.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Pascal Costanza
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39j4c1F60geh0U1@individual.net>
Bernd Paysan wrote:
> Ulrich Hobelmann wrote:

>>What is bad about features that "no other system" has?
> 
> Nothing special. Common Lisp simply has so many unique features that its
> main proponent, Richard Gabriel, came to the conclusion that this is too
> much, and wrote a paper which titles "worse is better". Each of the unique
> features in isolation is a good idea, all of them together in a single
> programming language are too many.

No, that's not his conclusion. Better read the whole story at 
http://www.dreamsongs.com/WorseIsBetter.html not just that single essay.


Pascal
From: Bernd Paysan
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <p0beg2-h8c.ln1@vimes.paysan.nom>
Pascal Costanza wrote:

> Bernd Paysan wrote:
>> Ulrich Hobelmann wrote:
> 
>>>What is bad about features that "no other system" has?
>> 
>> Nothing special. Common Lisp simply has so many unique features that its
>> main proponent, Richard Gabriel, came to the conclusion that this is too
>> much, and wrote a paper which titles "worse is better". Each of the
>> unique features in isolation is a good idea, all of them together in a
>> single programming language are too many.
> 
> No, that's not his conclusion. Better read the whole story at
> http://www.dreamsongs.com/WorseIsBetter.html not just that single essay.

There's no single conclusion. The essays are really worth reading. From a
Forther's point of view, we share his feelings about "worse" languages like
C or C++ dominating just as well. We all see the mob running behind some
emperor without clothes, and people who would like to see our successful
software rewritten in C, because ... uhm, because ... er, ...

But from a discussion point of view, we Forthers are (deliberately) far
"worse" than C and Unix. We shun abstractions, we remove them where we can.
We simplify concepts even when the simplification is very obvious to the
user, and demand that the user change habits. We accuse others of attacking
problems that shouldn't be solved at all.

Or in terms of the "Worse is Better" essay, the design characteristics of a
Forth program are clearly the "New Jersey" kind of approach (though Chuck
Moore has an MIT degree http://www.colorforth.com/mit.jpg). We simply do not
agree with Richard that C is written with the New Jersey approach in mind,
it's far too complex. It's possible to write a much simpler compiler than a
C compiler (a Forth compiler ;-). Or Unix. Unix provides a number of
complex things that are rarely necessary. E.g. multi-user security from
malicious programs. Or files that always look like streams of bytes. Silly
abstractions.

The Forth OS was an exokernel 30 years before the name was coined. There
are basically two abstractions you need for an exokernel: tasks and blocks
(pages mapped into the address space). These two components are universal,
i.e. you can construct everything you like when you need it. When you
don't, resist the temptation. We Forthers think the simplest possible
solution is "the right way".

BTW: MIT does very interesting exokernel research, too:
http://www.pdos.lcs.mit.edu/exo.html

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Paul E. Bennett
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d0vp43$j7n$1$8302bc10@news.demon.co.uk>
Duane Rettig wrote:

>> No, the "barriers" in question include the overhead of processing
>> calling sequences, saving and restoring registers, etc., plus the
>> programming overhead of declaring things and other procedures common
>> at the beginning and end of subroutines.  Although one could say that
>> both Forth and C pass parameters on a stack, the big difference is
>> that a C compiler generates code to arrange the parameters where it
>> wants them, while in Forth they're just there (typically left as the
>> result of prior operations).  Although Forth implementations vary, in
>> the systems I use the only "overhead" involved in calling a word is
>> literally the call/return instructions.
> 
> There is at least some overhead involved in storing and retrieving a
> value from the user stack.  In both Lisp and C, where the architecture
> allows it, arguments don't even need to be stored into the stack; they
> can be simply passed in registers.  And in Lisp, these values can also
> be passed back out:

Well-factored Forth programmes would ensure that the required data is just 
there on the stack ready to be consumed by the callee word. The callee word 
leaves its results on the stack ready for the next word in line to pick them 
up from there. Which is why most of us will spend a bit of time thinking 
about the problem and the best way to structure the code before we even 
write the first line. In this respect, Forth helps to train the mind into 
proper observation of the problem space so that the most elegant, logical 
and apposite solution can be discovered.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: billy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1111009026.149355.146870@z14g2000cwz.googlegroups.com>
Paul E. Bennett wrote:
> Duane Rettig wrote:
> > There is at least some overhead involved in storing and retrieving a
> > value from the user stack.

I didn't notice this in the original post -- sorry. This is partially
true, and is an important part of why it's good that programmers hate
stack noise: every time you decide to rework your algorithm to
eliminate stack noise, you also reduce your user stack accesses.
Clean-looking code in Forth is efficient code. Isn't that a pleasant
change?

The reason it's only partially true is that it's very simple to write a
register optimiser for a stack. In some ways it's easier than it would
be for a parameter-based language like Lisp or C -- the Forth
programmer will have been very careful to line up his arguments in the
most visually pleasing way, which is also the most efficient way. Thus,
the stack is already ordered in terms of rapidity of need; the more
quickly needed items are at the top. Place the top items in registers,
the next-to-top in fast memory, and the rest in slow memory, and you've
got a huge speedup for no real analysis cost. (Of course, a real
compiler will also perform dataflow analysis, but that's the same for
all languages.)
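
As a toy sketch of that placement policy (in Lisp, with made-up depth
thresholds; purely for illustration):

;; Toy sketch of the placement policy described above; the depth
;; thresholds are invented for illustration.
(defun storage-class (depth-from-top)
  (cond ((< depth-from-top 2) :register)      ; hottest items live in registers
        ((< depth-from-top 8) :fast-memory)   ; the next few in fast memory
        (t                    :slow-memory))) ; the rest can be slow

;; (storage-class 0) => :REGISTER    (storage-class 5) => :FAST-MEMORY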

> > In both Lisp and C, where the architecture
> > allows it, arguments don't even need to be stored into the stack; they
> > can be simply passed in registers.  And in Lisp, these values can also
> > be passed back out:

Ditto for Forth. Ditto for any language with a dataflow-analysing
compiler. Dataflow in Forth is explicit, so it's perhaps a tiny bit
easier to analyse, but I don't think anyone notices that tiny of an
amount (compared to the difficulty of analysis and optimization, that
is).

> up from there. Which is why most of us will spend a bit of time thinking
> about the problem and the best way to structure the code before we even
> write the first line. In this respect, Forth helps to train the mind into
> proper observation of the problem space so that the most elegant, logical
> and apposite solution can be discovered.

I actually replied to this to agree with your point, and add that the
solution that makes the text look cleanest is not just superficially
clean -- it's fundamentally cleaner, faster, and so on.

> Paul E. Bennett ....................<···········@amleth.demon.co.uk>

-Billy
From: Steven E. Harris
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <83u0ngn442.fsf@torus.sehlabs.com>
Duane Rettig <·····@franz.com> writes:

> In Lisp, I can define higher level functions, and I can test them
> right away; I don't even have to define the lower level functions
> that they call (if one is called and is not yet defined, I get an
> error in a prompt, to which I can respond either by defining the
> function and continuing, or by supplying a value temporarily in
> place of the function call).

That's a great feature I haven't used, so I decided to try it
out. Sadly, this last restart -- supplying a value in lieu of the
undefined call -- is not available in CLISP. The former one --
supplying a function definition -- is available, but I can't get it to
work. Perhaps a bug report is due.
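
For anyone who wants to try the same experiment, a minimal sketch of the
kind of thing involved (the function names are just placeholders, and the
exact error text and restarts differ between implementations):

(defun total-price (basket)
  ;; LOOKUP-PRICE is deliberately left undefined at this point.
  (reduce #'+ (mapcar #'lookup-price basket)))

;; (total-price '(bread milk)) then signals an UNDEFINED-FUNCTION error for
;; LOOKUP-PRICE; depending on the implementation, the debugger may offer a
;; restart to retry after defining LOOKUP-PRICE, and possibly one to supply
;; a value to use in place of the call.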

Both restarts did work as expected in Allegro CL, which I was able to
try over telnet to Franz. [1]


Footnotes: 
[1] http://www.franz.com/products/allegrocl/prompt/

-- 
Steven E. Harris
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39gshuF5uv2g1U1@individual.net>
Duane Rettig wrote:
>>Short definitions are desirable in
>>any programming language. Forth has fewer barriers to short
>>definitions than others, so it's simply silly to write long ones.
> 
> 
> I doubt this.  If your definitions of "barriers" is shorter names
> and less syntax, then I would call it "more barriers to long
> definitions" rather than "fewer barriers to short definitions".
> Lisp programmers tend to choose longer, more descriptive names
> over shorter, more cryptic ones.  And we have an automatic tool
> within the language that provides for stylized output of code
> (since it can be represented, and thus printed, as data).  So
> Lisp actually provides fewer barriers to longer definitions than
> other languages.

Both Lisp and Forth allow a function's code to be inserted almost as-is: Forth 
because it's concatenative in a way, Lisp because it's lexically scoped 
and parenthesized.  The difference is that in Lisp most variables are 
named, which helps decipher large functions, I think.

However I agree that factoring into small functions should be practiced 
more (in most languages), and in Forth it is (maybe because it's 
necessary).  The rule should be that one function does one thing only. 
I think most (good) programmers know and do this, so functions should 
rarely get too big.

> Now, of course, I'm not advocating long definitions.  A definition
> is too long when it can't be mentally pictured as a module in the
> programmers imagination.  But many times, the only reason for
> breaking up a function is not for modularity, but because "it is
> too long", by someone's arbitrary standard.  Lisp allows me to
> avoid breaking up such functions in these cases, where the length
> of the function in fact does not contribute to unreadability.
> Of course, if you've never seen a readable long function, you'll
> doubtless argue with me, but that's alright; it just proves my
> point that in Lisp I can have long, readable functions.

Interestingly, I would say that a long function in Lisp is readable iff 
its code is mostly sequential in nature, not if it's something like
(foo bar baz (let ...
		(eueue()()()(..... with a large blob of code there...

So as function length increases, the importance of naming mechanisms 
like "let" and of sequential code behavior grows.

>>> And why should I have to put
>>>on the straightjacket of bottom-up programming (or testing) when I
>>>can do it top-down, or even sideways, in Lisp?
>>
>>Well, I generally try to design top down. On the other hand, one tests
>>bottom up for the same reason one builds a house bottom up: what
>>matters is how each piece sits on the pieces that hold it up!
> 
> [... top-down-testing works in Lisp ...]

I would be interested in how this top-down testing works, though.  In 
every programming language (ML, Scheme, Forth, C, Java) I usually do 
bottom-up testing, for the above-mentioned reasons.  It usually leaves my 
code without any need for long debugging sessions.
From: Paul E. Bennett
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d0urs3$sv0$1$8300dec7@news.demon.co.uk>
Duane Rettig wrote:

>> >>But the reality is that experienced Forth programmers have very little
>> >>trouble here. Write short definitions, document stack effect, test
>> >>bottom up.
>> > Right.  What if I wanted to write longer definitions?
>> 
>> 
>> Why, to make your code unreadable?
> 
> I guess it depends on your definition of short.  But if your language
> doesn't allow you to write a definition that is longer than normal,
> without the code becoming unreadable, then that is one mark against
> your language.

Why would anyone want to write longer definitions than is absolutely 
necessary?

Take for example the cliche    DUP *

This gives you the square of the number at TOS. In certain applications it 
might be used a large number of times in different definitions. Hence we 
might define:-

: SQUARED (S n -- n^2)
(G n^2 is the resultant of the number n multiplied by itself. )
(  Limitations: the number n can be no larger than the square )
(  root of an n^2 which occupies all the bits of a cell. )
   DUP * ;

There is very little cost in doing this as the call structure of Forth was 
built in at a very fundamental level and is in common enough usage that 
this was the first part of Forth to be considered for optimisation on any 
real machine. Would you have it that we should just include the DUP * 
cliche instead of defining and using SQUARED?  How much longer does it need 
to be?

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4ll8szw8p.fsf@franz.com>
"Paul E. Bennett" <···@amleth.demon.co.uk> writes:

> Duane Rettig wrote:
> 
> >> >>But the reality is that experienced Forth programmers have very little
> >> >>trouble here. Write short definitions, document stack effect, test
> >> >>bottom up.
> >> > Right.  What if I wanted to write longer definitions?
> >> 
> >> 
> >> Why, to make your code unreadable?
> > 
> > I guess it depends on your definition of short.  But if your language
> > doesn't allow you to write a definition that is longer than normal,
> > without the code becoming unreadable, then that is one mark against
> > your language.
> 
> Why would anyone want to write longer definitions than is absolutely 
> necessary.

Well, you just did (and thanks for making my point very elegantly):

> Take for example the cliche    DUP *
> 
> This gives you the square of the number at TOS. In certain applications it 
> might be used a large number of times in different definitions. Hence we 
> might define:-
> 
> : SQUARED (S n -- n^2)
> (G n^2 is the resultant of the number n multiplied by itself. )
> (  Limitations: the number n can be no larger than the square )
> (  root of an n^2 which occupies all the bits of a cell).
>    DUP * ;

So which is longer: "DUP *", or "SQUARED"?

I count 4 characters in the first (or 5, if you include whitespace)
and 7 in the second.

And yet, which is more complex cognitively?  If you are an experienced
Forther, you might answer the first, because you've seen it so often
and it becomes second nature to you (plus it uses standard words that
you can trust to do what you expect them to do).  But to anyone not quite
as familiar with a DUP and * combination, some thought must be put into
play as to what occurs on the stack, whereas the very word SQUARED is
self-documenting, and becomes the more readable version.

> There is very little cost in doing this as the call structure of Forth was 
> built in at a very fundamental level and is in common enough usage that 
> this was the first part of Forth to be considered for opitimisation on any 
> real machine. Would you have it that we should just include the DUP * 
> cliche instead of defining and using SQUARED. How much longer does it need 
> to be?

You apparently didn't count your characters very well.  You (and other
Forthers) are advocating combining common cliches into single words,
but when the words are longer than their cliches, do you advocate
going back to the cliche?  In other words, if John Doty was correct
in his advocacy of short definitions over long ones, then DUP * should
win, by his standard.  That certainly wouldn't be my position.  Give
me a longer, more descriptive name to call any day, rather than its
guts that are short.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: John Doty
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <42335739.4080508@whispertel.LoseTheH.net>
Duane Rettig wrote:

> You apparently didn't count your characters very well.  You (and other
> Forthers) are advocating combining common cliches into single words,
> but when the words are longer than their cliches, do you advocate
> going back to the cliche?  In other words, if John Doty was correct
> in his advocacy of short definitions to long ones, then DUP * should
> win, by his standard.

I don't agree that character count is the simplicity measure I'm after. 
I'm not about to change the name of "load" to "込" (although it's 
tempting ;-). Simplicity in programming is psychological, not logical.

I think the 7+-2 rule of psychology applies. Human reductionism tends to 
work by sevens, more or less. Seven symbols is a nice readable size for 
the body of a Forth definition. On the other hand, I won't object to 
dropping common connectives like "@" from the count.

Of course, in most programming languages it's tough to do anything 
useful in seven symbols. Maybe in C the count should be statements.

This stuff takes judgement, not rigid, dogmatic thinking. In particular, 
you have to pay attention to the thought processes of those who might 
need to read your code. A two symbol definition like Paul's SQUARE is 
2.5 sigma away from 7+-2: it might be considered marginal. Personally, I 
think I'd rather see "DUP *", but other readers might find SQUARE 
easier. Might depend on how much other visual distraction there is in 
the code.

-jpd
From: Bernd Paysan
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <9vccg2-1je.ln1@vimes.paysan.nom>
John Doty wrote:
> This stuff takes judgement, not rigid, dogmatic thinking. In particular,
> you have to pay attention to the thought processes of those who might
> need to read your code. A two symbol definition like Paul's SQUARE is
> 2.5 sigma away from 7+-2: it might be considered marginal. Personally, I
> think I'd rather see "DUP *", but other readers might find SQUARE
> easier. Might depend on how much other visual distraction there is in
> the code.

Try this example:

: distance ( a b -- c )  square swap square + sqrt ;

vs.

: distance ( a b -- c )  dup * swap dup * + sqrt ;

I find the upper version more readable, since it's obvious that the distance
is the square root of the sum of squares.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: John Doty
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4233874F.7030905@whispertel.LoseTheH.net>
Good example! Connect with the math culture, bring out the meaning.

Bernd Paysan wrote:
> John Doty wrote:
> 
>>This stuff takes judgement, not rigid, dogmatic thinking. In particular,
>>you have to pay attention to the thought processes of those who might
>>need to read your code. A two symbol definition like Paul's SQUARE is
>>2.5 sigma away from 7+-2: it might be considered marginal. Personally, I
>>think I'd rather see "DUP *", but other readers might find SQUARE
>>easier. Might depend on how much other visual distraction there is in
>>the code.
> 
> 
> Try this example:
> 
> : distance ( a b -- c )  square swap square + sqrt ;
> 
> vs.
> 
> : distance ( a b -- c )  dup * swap dup * + sqrt ;
> 
> I find the upper version more readable, since it's obvious that the distance
> is the square root of the sum of squares.
> 

Now I know your metric tensor is unity :-)

-jpd
From: Elizabeth D Rather
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <1136sjue9pkir17@news.supernews.com>
"Duane Rettig" <·····@franz.com> wrote in message 
··················@franz.com...
> "Paul E. Bennett" <···@amleth.demon.co.uk> writes:
>
...
> So which is longer: "DUP *", or "SQUARED"?
>
> I count 4 characters in the first (or 5, if you include whitespace)
> and 7 in the second.

So?  Counting characters in the name is utterly unrelated to runtime 
efficiency.  A compiled reference to DUP and * might involve two calls, and 
a reference to SQUARED only one, so the latter would be shorter.  On the 
other hand, some compilers might compile the instructions in line for really 
low-level primitives like DUP and *, so that's maybe not the best example. 
But my point is that the length of the name is really irrelevant to any 
discussion of efficiency.  As others have observed, in an embedded system 
the names might be stripped anyway.

> And yet, which is more complex cognitively?  If you are an experienced
> Forther, you might answer the first, because you've seen it so often
> and it becomes second nature to you (plus it uses standard words that
> you can trust to do what you expect them to do).  But to anyone not quite
> as familiar with a DUP and * combination, some thought must be put into
> play as to what occurs on the stack, whereas the very word SQUARED is
> self-documenting, and becomes the more readable version.

Agreed, that would be a good argument for doing it.  And it also emphasizes 
the fact that making smaller components can enhance readability.

> ...
> You apparently didn't count your characters very well.  You (and other
> Forthers) are advocating combining common cliches into single words,
> but when the words are longer than their cliches, do you advocate
> going back to the cliche?  In other words, if John Doty was correct
> in his advocacy of short definitions to long ones, then DUP * should
> win, by his standard.  That certainly wouldn't be my position.  Give
> me a longer, more descriptive name to call any day, rather than its
> guts that are short.

As various of us have said, the length of the name(s) is irrelevant except 
in source.  To me, counting characters in source is one of the less 
important issues in writing efficient code.  What John is talking about when 
he says "short" is the number of things to be done, not the number of 
characters needed to describe them.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4acp8ylnj.fsf@franz.com>
"Elizabeth D Rather" <··········@forth.com> writes:

> "Duane Rettig" <·····@franz.com> wrote in message
> ··················@franz.com...
> 
> > "Paul E. Bennett" <···@amleth.demon.co.uk> writes:
> >
> ...
> > So which is longer: "DUP *", or "SQUARED"?
> >
> > I count 4 characters in the first (or 5, if you include whitespace)
> > and 7 in the second.
> 
> So?  Counting characters in the name is utterly unrelated to runtime
> efficiency.  A compiled reference to DUP and * might involve two
> calls, and a reference to SQUARED only one, so the latter would be
> shorter.  On the other hand, some compilers might compile the
> instructions in line for really low-level primitives like DUP and *,
> so that's maybe not the best example. But my point is that the length
> of the name is really irrelevant to any discussion of efficiency.  As
> others have observed, in an embedded system the names might be
> stripped anyway.

Why are we still talking about runtime efficiency?  I answered one
article about that, and finished by saying that that was not what we
were talking about.  For all of those who answered tangentially, I will
not reply to each post, but let me say here once what my point is,
based on my original conversation with John Doty:

DR: ... the user stack will end up growing or shrinking incorrectly
if there is a mismatch.

JD:  ... write short definitions ...

DR: What if I wanted to write longer definitions?

JD: Why, to make your code unreadable?

Originally, my discussion with John Doty was based on stack
mismatch issues, and nothing else.  John's answer included a
maxim to write short definitions.  But I don't like that maxim,
so I'm asking what I could do in Forth in order not to be
bound by that rule.  The responses I've received (as well as my
experiences) lead me to conclude that if I want to write long
definitions, I should not be using Forth.  That's alright,
though; I already have a language in which I can write long
definitions without loss of cognitive simplicity.

> > And yet, which is more complex cognitively?  If you are an experienced
> > Forther, you might answer the first, because you've seen it so often
> > and it becomes second nature to you (plus it uses standard words that
> > you can trust to do what you expect them to do).  But to anyone not quite
> > as familiar with a DUP and * combination, some thought must be put into
> > play as to what occurs on the stack, whereas the very word SQUARED is
> > self-documenting, and becomes the more readable version.
> 
> Agreed, that would be a good argument for doing it.  And it also
> emphasizes the fact that making smaller components can enhance
> readability.

Correct.  But there are times when a large function can indeed be used
without any loss of readability.  Table-based code is one such example;
in Lisp a case statement might even span many pages, but to break it
up would not necessarily increase the readability or any reusability.
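
For example, something shaped like this (trimmed to a few clauses; the names
are illustrative, and a real table of this kind might run to hundreds):

(defun opcode-name (op)
  ;; each clause is independent and formulaic, so the definition stays
  ;; readable no matter how many clauses the table grows to
  (case op
    (#x00 :nop)
    (#x01 :load)
    (#x02 :store)
    (#x03 :add)
    (#x04 :jump)
    (t    :unknown)))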

> > You apparently didn't count your characters very well.  You (and other
> > Forthers) are advocating combining common cliches into single words,
> > but when the words are longer than their cliches, do you advocate
> > going back to the cliche?  In other words, if John Doty was correct
> > in his advocacy of short definitions to long ones, then DUP * should
> > win, by his standard.  That certainly wouldn't be my position.  Give
> > me a longer, more descriptive name to call any day, rather than its
> > guts that are short.
> 
> As various of us have said, the length of the name(s) is irrelevant
> except in source.

We're talking precisely and only about source.  Since there are
apparently no runtime checks for stack mismatches, they are not caught
or introduced at compile-time, but instead they become directly dependent
on source only.

>  To me, counting characters in source is one of the
> less important issues in writing efficient code.  What John is talking
> about when he says "short" is the number of things to be done, not the
> number of characters needed to describe them.

Yes, but even this gets away from my question about what to do with these
pesky stack mismatches.  It is one of the reasons why I did not stop
looking for a better language, even after I had found Forth.  At that
time, I was awed by its simplicity and power, including its extensibility.
Now, I work in a language that gives me all that, including stack
safety, and other things as well.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Elizabeth D Rather
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <1138obad4r4dr47@news.supernews.com>
"Duane Rettig" <·····@franz.com> wrote in message 
··················@franz.com...
> "Elizabeth D Rather" <··········@forth.com> writes:
> ...
>> > I count 4 characters in the first (or 5, if you include whitespace)
>> > and 7 in the second.
>>
>> So?  Counting characters in the name is utterly unrelated to runtime
>> efficiency.  A compiled reference to DUP and * might involve two
>> calls, and a reference to SQUARED only one, so the latter would be
>> shorter.  On the other hand, some compilers might compile the
>> instructions in line for really low-level primitives like DUP and *,
>> so that's maybe not the best example. But my point is that the length
>> of the name is really irrelevant to any discussion of efficiency.  As
>> others have observed, in an embedded system the names might be
>> stripped anyway.
>
> Why are we still talking about runtime efficiency?  I answered one
> article about that, and finished by saying that that was not what we
> were talking about.  For all of those who answered tangentially, I will
> not reply to each post, but let me say here once what my point is,
> based on my original conversation with John Doty:
>
> DR: ... the user stack will end up growing or shrinking incorrectly
> if there is a mismatch.
>
> JD:  ... write short definitions ...

What John means here is "short" in the sense of having few things to do, not 
measured in characters.  If there are only a few operations, it's quite easy 
to model the stack behavior and avoid mismatches.

> DR: What if I wanted to write longer definitions?
>
> JD: Why, to make your code unreadable?

Again, what I think John is trying to say is that as definitions become 
longer, it's harder to model them in your head and avoid errors.  It's also 
more difficult to perform the sort of thorough validation that Paul Bennett 
finds necessary (and most of us find highly desirable).

> Originally, my discussion with John Doty was based on stack
> mismatch issues, and nothing else.  John's answer included a
> maxim to write short definitions.  But I don't like that maxim,
> so I'm asking what I could do in Forth in order not to be
> bound by that rule.  The responses I've received (as well as my
> experiences) lead me to conclude that if I want to write long
> definitions, I should not be using Forth.  That's alright,
> though; I already have a language in which I can write long
> definitions without loss of cognitive simplicity.

Learning to manage the stack in Forth is a lot like learning to ride a 
bicycle.  In early learning stages, it's wobbly and difficult, and one has 
an intense desire for training wheels.  At some point, though, a switch is 
thrown in the head and the rider "knows" (although it isn't a conscious, 
thinking process) how to stay upright.  Similarly, in Forth, one learns 
vocabulary not only in terms of what each word does, but what its stack 
behavior is.  A major part of my professional life has been teaching Forth 
courses; I see beginners going through a similar process of finding this 
intensely difficult for the first few days, but suddenly they "get it" and 
after that modeling stack behavior becomes natural and easy.  Stack 
mismatches are among the easiest kinds of errors to detect in testing, and 
as one gains experience occurrences become few, so that is why there's 
relatively little pressure to automate the process more than it is already.

> ...
> Correct.  But there are times when a large function can indeed be used
> without any loss of readability.  Table-based code is one such example;
> in Lisp a case statement might even span many pages, but to break it
> up would not necessarily increase the readability or any reusability.

In Forth, for such a situation we'd be more likely to use an actual table of 
execution vectors (pointers to the word to be executed in each case).  That 
certainly wouldn't take pages to set up, and you'd have the advantage of 
being able to test each word individually (whereas it's hard to test the 
selected phrase in a CASE statement).

>> ...
>> As various of us have said, the length of the name(s) is irrelevant
>> except in source.
>
> We're talking precisely and only about source.  Since there are
> apparently no runtime checks for stack mismatches, they are not caught
> or introduced at compile-time, but instead they become directly dependent
> on source only.

If you're examining source, what's important is the number of words and the 
clarity of their names, not the number of characters.

> ..
> Yes, but even this gets away from my question about what to do with these
> pesky stack mismatches.  It is one of the reasons why I did not stop
> looking for a better language, even after I had found Forth.  At that
> time, I was awed by its simplicity and power, including its extensibility.
> Now, I work in a language that gives me all that, including stack
> safety, and other things as well.

Believe me, those of us who write Forth for a living do everything possible 
to make our task simpler and our code more reliable.  If we felt that 
automated stack analysis tools would achieve this, we would certainly have 
them.  The fact is that most of us regard such things as training wheels on 
a bicycle, that decrease the bike's efficiency and postpone the kind of 
learning you'd get by doing without.  It doesn't take long to learn good 
stack skills in Forth if you focus on it that way: typically about 3 days of 
full-time work, in my experience.  Thereafter, it's a non-problem compared 
to the real issues in programming, namely, learning what the application 
really requires and how to meet those requirements.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <464zuy1mk.fsf@franz.com>
"Elizabeth D Rather" <··········@forth.com> writes:

> "Duane Rettig" <·····@franz.com> wrote in message
> ··················@franz.com...
> 
> > "Elizabeth D Rather" <··········@forth.com> writes:
> > ...
> >> > I count 4 characters in the first (or 5, if you include whitespace)
> >> > and 7 in the second.
> >>
> >> So?  Counting characters in the name is utterly unrelated to runtime
> >> efficiency.  A compiled reference to DUP and * might involve two
> >> calls, and a reference to SQUARED only one, so the latter would be
> >> shorter.  On the other hand, some compilers might compile the
> >> instructions in line for really low-level primitives like DUP and *,
> >> so that's maybe not the best example. But my point is that the length
> >> of the name is really irrelevant to any discussion of efficiency.  As
> >> others have observed, in an embedded system the names might be
> >> stripped anyway.
> >
> > Why are we still talking about runtime efficiency?  I answered one
> > article about that, and finished by saying that that was not what we
> > were talking about.  For all of those who answered tangentially, I will
> > not reply to each post, but let me say here once what my point is,
> > based on my original conversation with John Doty:
> >
> > DR: ... the user stack will end up growing or shrinking incorrectly
> > if there is a mismatch.
> >
> > JD:  ... write short definitions ...
> 
> What John means here is "short" in the sense of having few things to
> do, not measured in characters.  If there are only a few operations,
> it's quite easy to model the stack behavior and avoid mismatches.
> 
> 
> > DR: What if I wanted to write longer definitions?
> >
> > JD: Why, to make your code unreadable?
> 
> Again, what I think John is trying to say is that as definitions
> become longer, it's harder to model them in your head and avoid
> errors.  It's also more difficult to perform the sort of thorough
> validation that Paul Bennett finds necessary (and most of us find
> highly desirable).
> 
> 
> > Originally, my discussion with John Doty was based on stack
> > mismatch issues, and nothing else.  John's answer included a
> > maxim to write short definitions.  But I don't like that maxim,
> > so I'm asking what I could do in Forth in order not to be
> > bound by that rule.  The responses I've received (as well as my
> > experiences) lead me to conclude that if I want to write long
> > definitions, I should not be using Forth.  That's alright,
> > though; I already have a language in which I can write long
> > definitions without loss of cognitive simplicity.
> 
> Learning to manage the stack in Forth is a lot like learning to ride a
> bicycle.  In early learning stages, it's wobbly and difficult, and one
> has an intense desire for training wheels.  At some point, though, a
> switch is thrown in the head and the rider "knows" (although it isn't
> a conscious, thinking process) how to stay upright.  Similarly, in
> Forth, one learns vocabulary not only in terms of what each word does,
> but what its stack behavior is.  A major part of my professional life
> has been teaching Forth courses; I see beginners going through a
> similar process of finding this intensely difficult for the first few
> days, but suddenly they "get it" and after that modeling stack
> behavior becomes natural and easy.  Stack mismatches are among the
> easiest kinds of errors to detect in testing, and as one gains
> experience occurrences become few, so that is why there's relatively
> little pressure to automate the process more than it is already.

But if in the process they start equating the largeness (i.e. how much
of a definition one can hold in one's head) with bugginess, then it
becomes automatic to consider a large definition hard to understand
precisely _because_ it tends to be more buggy.  If the connection
between size and bugginess could be broken, then the amount
of code that could be considered at once lexically becomes larger,
and the "height" [1] of the domain-specific functionality can be
achieved with fewer cognitive levels.

[1] I'm coining a term here to measure how high-level or
low-level a language is - for both Forth and Lisp, which are both
extensible, "height" is variable, from the most fundamental of low-level
operators to the heights of the domain-specific problem to be solved.

> > Correct.  But there are times when a large function can indeed be used
> > without any loss of readability.  Table-based code is one such example;
> > in Lisp a case statement might even span many pages, but to break it
> > up would not necessarily increase the readability or any reusability.
> 
> In Forth, for such a situation we'd be more likely to use an actual
> table of execution vectors (pointers to the word to be executed in
> each case).  That certainly wouldn't take pages to set up, and you'd
> have the advantage of being able to test each word individually
> (whereas it's hard to test the selected phrase in a CASE statement).

Yes, but as I asked in response to a similar suggestion, how would
you then judge the word that actually contains the case statement
(i.e. the setup of hundreds of cases which will invoke the case-specific
words)?  It might get long enough to span several screens, but
couldn't we call it cognitively "short" anyway?  When I asked what
constituted "short", you mentioned 3 to 5 words, with up to 10 lines
(which you described as unusual).  What say you now?
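
(To make the question concrete, here is the shape I have in mind, sketched in
Lisp with made-up handler names: the dispatcher stays tiny, but the table that
sets up the cases is the part whose "length" I am asking about.)

(defun handle-nop (&rest args) (declare (ignore args)) :nop)
(defun handle-add (x y) (+ x y))

(defparameter *handlers*
  ;; the "setup": one entry per case; a real table might have hundreds
  (vector #'handle-nop #'handle-add))

(defun dispatch (op &rest args)
  ;; the dispatcher itself stays cognitively short however large the
  ;; table grows, and each handler can be tested on its own
  (apply (aref *handlers* op) args))

;; (dispatch 1 2 3) => 5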

> >> As various of us have said, the length of the name(s) is irrelevant
> >> except in source.
> >
> > We're talking precisely and only about source.  Since there are
> > apparently no runtime checks for stack mismatches, they are not caught
> > or introduced at compile-time, but instead they become directly dependent
> > on source only.
> 
> If you're examining source, what's important is the number of words
> and the clarity of their names, not the number of characters.

I explored in another article the possibility that it is really the
complexity of the stack manipulation that might _really_ contribute
to this ethereal "length" quality, and not numbers of words or
characters.  Is this a possibility?

> > Yes, but even this gets away from my question about what to do with these
> > pesky stack mismatches.  It is one of the reasons why I did not stop
> > looking for a better language, even after I had found Forth.  At that
> > time, I was awed by its simplicity and power, including its extensibility.
> > Now, I work in a language that gives me all that, including stack
> > safety, and other things as well.
> 
> Believe me, those of us who write Forth for a living do everything
> possible to make our task simpler and our code more reliable.  If we
> felt that automated stack analysis tools would achieve this, we would
> certainly have them.  The fact is that most of us regard such things
> as training wheels on a bicycle, that decrease the bike's efficiency
> and postpone the kind of learning you'd get by doing without.  It
> doesn't take long to learn good stack skills in Forth if you focus on
> it that way: typically about 3 days of full-time work, in my
> experience.  Thereafter, it's a non-problem compared to the real
> issues in programming, namely, learning what the application really
> requires and how to meet those requirements.

I had some paragraphs written to take the analogy of training
wheels farther, but thought the better of it.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Elizabeth D Rather
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <113bqh8j6l62c07@news.supernews.com>
"Duane Rettig" <·····@franz.com> wrote in message 
··················@franz.com...
> "Elizabeth D Rather" <··········@forth.com> writes:
>
>> ... Stack mismatches are among the
>> easiest kinds of errors to detect in testing, and as one gains
>> experience occurrences become few, so that is why there's relatively
>> little pressure to automate the process more than it is already.
>
> But if in the process they start equating the largeness (i.e. how much
> of a definiton one can hold in one's head) with bugginess, then it
> becomes automatic to consider a large definition hard to understand
> precisely _because_ it tends to be more buggy.  If the connection
> between size and bugginess were able to be broken, then the amount
> of code that could be considered at once lexically becomes larger,
> and the "height" [1] of the domain-specific functionality can be
> achieved with fewer cognitive levels.

I believe there *is* a connection between the number of things a definition 
has to do and "bugginess".  I base that largely on experience, but have some 
good analytical backup in George Miller's classical paper, "The Magical 
Number Seven, Plus or Minus Two", found here 
http://www.well.com/user/smalin/miller.html (and other places, no doubt). 
Miller contends that humans can easily mentally model up to around seven, 
but beyond that they tend to lose focus.  Although I'm as leery of rigid 
dogma as you are, I have found it an effective guideline to say a definition 
shouldn't try to do more than about 7 things (and often less).  I'm talking 
about conceptual things here (read a value; select a string; compare two 
items; etc.), not necessarily words (and certainly not letters).

>> > Correct.  But there are times when a large function can indeed be used
>> > without any loss of readability.  Table-based code is one such example;
>> > in Lisp a case statement might even span many pages, but to break it
>> > up would not necessarily increase the readability or any reusability.
>>
>> In Forth, for such a situation we'd be more likely to use an actual
>> table of execution vectors (pointers to the word to be executed in
>> each case).  That certainly wouldn't take pages to set up, and you'd
>> have the advantage of being able to test each word individually
>> (whereas it's hard to test the selected phrase in a CASE statement).
>
> Yes, but as I asked in response to a similar suggestion, how would
> you then judge the word that actually contains the case statement
> (i.e. the setup of hundreds of cases which will invoke the case-specific
> words)?  It might get long enough to span several screens, but
> couldn't we call it cognitively "short" anyway?  When I asked what
> constituted "short", you mentioned 3 to 5 words, with up to 10 lines
> (which you described as unusual).  What say you now?

ANS Forth does include a CASE statement, but I would never use it for 
"hundreds" of cases!  I saw Alain's example posted above, and find it 
appalling.  In Forth we would use other structures to simplify management of 
a large selection table; Anton Ertl's suggestions are representative.

Certainly a definition with a CASE in it will be longer than the few lines I 
mentioned, but one must allow for a certain amount of variance; that would 
be at the longish end of the spectrum.  Words that set up Windows calls also 
tend to have a lot of lines, although what they're doing is fairly simple. 
Incidentally, most Forths keep source in OS files rather than "screens" 
nowadays.

>> ...
>> If you're examining source, what's important is the number of words
>> and the clarity of their names, not the number of characters.
>
> I explored in another article the possibility that it is really the
> complexity of the stack manipulation that might _really_ contribute
> to this ethereal "length" quality, and not numbers of words or
> characters.  Is this a possibility?

As the number of things a definition does grows, it's certainly possible 
that modeling the stack becomes more difficult, but in my experience that's 
not the problem so much as just the overall complexity.  We have done a few 
studies on very complex applications, and find that the stack depth rarely 
gets above 6-8 items, and a particular word rarely has to deal with more 
than a few.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39m1v4F60k06uU1@individual.net>
Duane Rettig wrote:
>>In Forth, for such a situation we'd be more likely to use an actual
>>table of execution vectors (pointers to the word to be executed in
>>each case).  That certainly wouldn't take pages to set up, and you'd
>>have the advantage of being able to test each word individually
>>(whereas it's hard to test the selected phrase in a CASE statement).
> 
> 
> Yes, but as I asked in response to a similar suggestion, how would
> you then judge the word that actually contains the case statement
> (i.e. the setup of hundreds of cases which will invoke the case-specific
> words)?  It might get long enough to span several screens, but
> couldn't we call it cognitively "short" anyway?  When I asked what
> constituted "short", you mentioned 3 to 5 words, with up to 10 lines
> (which you described as unusual).  What say you now?

I think such a word might be ok, but it could be more readable.  Most 
Forthers would probably look away in disgust.  The data-directed 
approach lets you define definition-words which build a (simple) data 
structure.  At runtime a kind of interpreter interprets the data 
structure (this is more efficient than it sounds :D).  If you want 
everything inlined, use macros, but not huge cases.
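
Very roughly, I mean something like this (all the names are made up, and 
it's nowhere near industrial strength):

\ a word that fills a table of ( key xt ) pairs, plus a tiny
\ "interpreter" that walks the table at run time
create actions  32 cells allot        \ room for 16 pairs
variable #actions   0 #actions !

: action: ( key "name" -- )           \ e.g.  char p action: do-print
   '  #actions @ 2* cells actions +   ( key xt addr )
   tuck cell+ !  !                    \ store the xt, then the key
   1 #actions +! ;

: run-action ( key -- )               \ look the key up, EXECUTE its word
   #actions @ 0 ?do
      i 2* cells actions +            ( key addr )
      dup @ 2 pick = if  cell+ @ nip execute  unloop exit  then
      drop
   loop  drop ." no such action" ;

The word that plays the role of the case statement is just run-action, 
and it stays the same size however many actions you register.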
From: Paul E. Bennett
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d11gl6$a6h$1$830fa795@news.demon.co.uk>
Duane Rettig wrote:

> Why are we still talking about runtime efficiency?  I answered one
> article about that, and finished by saying that that was not what we
> were talking about.  For all of those who answered tangentially, I will
> not reply to each post, but let me say here once what my point is,
> based on my original conversation with John Doty:

I would have thought that run time efficiency is a desirable facet of any 
software. It can save us from having to use a faster processor when the one 
we have to hand is getting near its maximum.
 
> DR: ... the user stack will end up growing or shrinking incorrectly
> if there is a mismatch.
> 
> JD:  ... write short definitions ...
> 
> DR: What if I wanted to write longer definitions?
> 
> JD: Why, to make your code unreadable?
> 
> Originally, my discussion with John Doty was based on stack
> mismatch issues, and nothing else.  John's answer included a
> maxim to write short definitions.  But I don't like that maxim,
> so I'm asking what I could do in Forth in order not to be
> bound by that rule.  The responses I've received (as well as my
> experiences) lead me to conclude that if I want to write long
> definitions, I should not be using Forth.  That's alright,
> though; I already have a language in which I can write long
> definitions without loss of cognitive simplicity.

There are techniques used, when writing Forth source, that enable the easy 
eradication of stack mis-matching. I always include the glossary 
definition and stack comments in with the source code so that I have the 
specification for the word at hand when editing the source. Having these 
additional bits of information in the source, I can test the word 
thoroughly for its stack effects and for its limitations, which leads to 
the ability to certify the code almost as though it were a hardware 
component. By keeping the definitions short this becomes an almost trivial 
task. The longer the definition and the more complex the internal logic 
of the word, the more difficult it is to offer such certain 
certification.
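
As a trivial illustration of the style (the T{ ... -> ... }T words here 
are John Hayes' test harness, which is an assumption about tooling on my 
part, not something of my own):

: SQUARED ( n -- n^2 )  DUP * ;

\ exercised against its stated stack effect the moment it is written
T{  3 SQUARED ->  9 }T
T{ -4 SQUARED -> 16 }T
T{  0 SQUARED ->  0 }T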
 
> We're talking precisely and only about source.  Since there are
> apparently no runtime checks for stack mismatches, they are not caught
> or introduced at compile-time, but instead they become directly dependent
> on source only.

As I have recently been reminding the Ada vs. C++ combatants (in 
comp.realtime) the development process is more important than the language 
used. I consider that any decent development process should be easy to keep 
in mind and applied with rigour (although not be so rigid as to be 
unworkable). My own process does not need the myriad of tools that other 
language advocates seem to imply are necessary. It is mostly down to 
frequent review (which includes static checking, unit testing, system 
testing and thoroughgoing stressing of the system to ensure that it does 
not break in unexpected ways under overloaded conditions).

> Yes, but even this gets away from my question about what to do with these
> pesky stack mismatches.

The only person to blame for a stack mismatch is the coder. I would expect 
anyone coding in Forth to have been able to test out what he has written 
immediately he has written it. That way we should always know when a 
word is not treating the stack properly and the testing regime for that has 
been aired here and in other language fora. There should be no excuse for a 
stack mis-match in Forth as it is a fundamental mechanism that we are using 
all the time and should be fully comfortable in dealing with.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4hdjey5sr.fsf@franz.com>
"Paul E. Bennett" <···@amleth.demon.co.uk> writes:

> Duane Rettig wrote:
> 
> > Why are we still talking about runtime efficiency?  I answered one
> > article about that, and finished by saying that that was not what we
> > were talking about.  For all of those who answered tangentially, I will
> > not reply to each post, but let me say here once what my point is,
> > based on my original conversation with John Doty:
> 
> I would have thought that run time efficiency is a desirable facet of any 
> software. It can save us from having to use a faster processor when the one 
> we have to hand is getting near its maximum.

Let's just agree to agree on this one, ok? :-)  I don't know why
it keeps coming up; there's no difference between Forth and Lisp
in this respect, and I'm trying to contrast, not compare, because
I'm frankly not interested in discussing areas where the languages
(and their cultures) already are in sync.  But OK, although I've
always agreed with it before, I'll just give up and agree with it
now as well :-(

> > DR: ... the user stack will end up growing or shrinking incorrectly
> > if there is a mismatch.
> > 
> > JD:  ... write short definitions ...
> > 
> > DR: What if I wanted to write longer definitions?
> > 
> > JD: Why, to make your code unreadable?
> > 
> > Originally, my discussion with John Doty was based on stack
> > mismatch issues, and nothing else.  John's answer included a
> > maxim to write short definitions.  But I don't like that maxim,
> > so I'm asking what I could do in Forth in order not to be
> > bound by that rule.  The responses I've received (as well as my
> > experiences) lead me to conclude that if I want to write long
> > definitions, I should not be using Forth.  That's alright,
> > though; I already have a language in which I can write long
> > definitions without loss of cognitive simplicity.
> 
> There are techniques used, when writing Forth source, that enable the easy 
> eradication of stack mis-matching. I always include the glossary 
> definition and stack comments in with the source code so that I have the 
> specification for the word at hand when editing the source. Having these 
> additional bits of information in the source, I can test the word 
> thoroughly for its stack effects and for its limitations, which leads to 
> the ability to certify the code almost as though it were a hardware 
> component.

Yes!  This is what I've been asking about.  Do you place such capability
into any development environments?  Do any IDEs exist for Forth, into which
such a tool might be included?

> By keeping the definitions short this becomes an almost trivial 
> task. The longer the definition, the more complex the internal logic of the 
> word then the more difficult a task it is to offer such certain 
> certification.

Why?  If the tool is programmatic and stack usage is deterministic
(even if inconsistent, for different logic paths), why would it be
any harder for such a tool to analyze a larger word?
  
> > We're talking precisely and only about source.  Since there are
> > apparently no runtime checks for stack mismatches, they are not caught
> > or introduced at compile-time, but instead they become directly dependent
> > on source only.
> 
> As I have recently been reminding the Ada vs. C++ combatants (in 
> comp.realtime) the development process is more important than the language 
> used. I consider that any decent development process should be easy to keep 
> in mind and applied with rigour (although not be so rigid as to be 
> unworkable). My own process does not need the myriad of tools that other 
> language advocates seem to imply are necessary. It is mostly down to 
> frequent review (which includes static checking, unit testing, system 
> testing and thoroughgoing stressing of the system to ensure that it does 
> not break in unexpected ways under overloaded conditions).

Yes, these are all good methodologies to use.  However, they are time
intensive, and they tend to thwart efforts to do "quick prototype"
coding.  In many IDEs, including the lisp ones, a function call being
typed automatically shows the argument list for the benefit of the
programmer, for rapid typing requiring no lookups by hand.  A similar
tool would be useful in any language, including Forth.

> > Yes, but even this gets away from my question about what to do with these
> > pesky stack mismatches.
> 
> The only person to blame for a stack mismatch is the coder. I would expect 
> anyone coding in Forth to have been able to test out what he has written 
> immediately he has written it.

How do you reconcile that statement with the statement that others are
making about designing top-down and testing bottom-up?  If you agree with
them, then your statement also seems to imply the requirement for
bottom-up coding.  Otherwise, if you design top-down and code a top
level word, you wouldn't be able to test it until the lower-level words
were coded.

> That way we should always know when a 
> word is not treating the stack properly and the testing regime for that has 
> been aired here and in other language fora. There should be no excuse for a 
> stack mis-match in Forth as it is a fundamental mechanism that we are using 
> all the time and should be fully comfortable in dealing with.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Ed Beroset
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <frYYd.2426$qf2.890@newsread2.news.atl.earthlink.net>
Duane Rettig wrote:
> "Elizabeth D Rather" <··········@forth.com> writes:
> 
> 
>>"Duane Rettig" <·····@franz.com> wrote in message
>>··················@franz.com...
>>>
>>>I count 4 characters in the first (or 5, if you include whitespace)
>>>and 7 in the second.
>>
>>So?  Counting characters in the name is utterly unrelated to runtime
>>efficiency. 
[...]
>>so that's maybe not the best example. But my point is that the length
>>of the name is really irrelevant to any discussion of efficiency.  As
>>others have observed, in an embedded system the names might be
>>stripped anyway.
> 
> 
> Why are we still talking about runtime efficiency?  I answered one
> article about that, and finished by saying that that was not what we
> were talking about.  

The styles of the two languages are very different, and I think that's 
where the confusion lies.  For example, let's look at this exchange:

> For all of those who answered tangentially, I will
> not reply to each post, but let me say here once what my point is,
> based on my original conversation with John Doty:
> 
> DR: ... the user stack will end up growing or shrinking incorrectly
> if there is a mismatch.
> 
> JD:  ... write short definitions ...
> 
> DR: What if I wanted to write longer definitions?
> 
> JD: Why, to make your code unreadable?
> 
> Originally, my discussion with John Doty was based on stack
> mismatch issues, and nothing else.  John's answer included a
> maxim to write short definitions.  But I don't like that maxim,
> so I'm asking what I could do in Forth in order not to be
> bound by that rule.  

If I may supply some of the subtext for the exchange from the Forth 
point of view, it might be this:  "If you are concerned about stack 
mismatches, the way to avoid them in Forth is to use definitions which 
don't have too many things on the stack."  That's what "short 
definitions" implies in this context.  Your question about long 
definitions is kind of strange from the Forther's view.  It's as though 
someone had claimed that language X was a way to write bug-free code and
you'd written, "but what if I wanted to write buggy code?"  It's not 
that the question doesn't make literal sense, but that it's not clear 
why anyone would want to do that.

One presumption about why one MIGHT want to do that was to gain presumed 
efficiency, hence the nature of the replies that you received.

> The responses I've received (as well as my
> experiences) lead me to conclude that if I want to write long
> definitions, I should not be using Forth.  That's alright,
> though; I already have a language in which I can write long
> definitions without loss of cognitive simplicity.

So this leads me to ask, what advantage do you find in writing long 
definitions?  In other words, why would one want long definitions 
instead of compounds of shorter ones?  I find that I tend to use the 
latter method in Lisp, and C and many other languages so that the code 
is easier to understand and maintain; I haven't encountered a compelling 
need to write long definitions.

> Correct.  But there are times when a large function can indeed be used
> without any loss of readability.  Table-based code is one such example;
> in Lisp a case statement might even span many pages, but to break it
> up would not necessarily increase the readability or any reusability.

My Lisp is extremely rusty, but it strikes me that a large case 
statement would tend to have short, repetitive actions for each case. 
That is, you wouldn't have extremely long actions for each case.  Just 
as in Forth, you'd break things up into named functions.

>>As various of us have said, the length of the name(s) is irrelevant
>>except in source.
> 
> We're talking precisely and only about source.  Since there are
> apparently no runtime checks for stack mismatches, they are not caught
> or introduced at compile-time, but instead they become directly dependent
> on source only.

That's correct, and since they're in source only, this leads directly to 
the maxim of keeping definitions short, with the understanding that the 
relevant implication isn't the number of characters typed in but the 
complexity and number of stack items for each definition.

> Yes, but even this gets away from my question about what to do with these
> pesky stack mismatches.  

The obvious answer is "avoid them!"  The other respondents have been 
trying to give a more detailed and useful answer, but it boils down to 
just that.

> It is one of the reasons why I did not stop
> looking for a better language, even after I had found Forth.  At that
> time, I was awed by its simplicity and power, including its extensibility.
> Now, I work in a language that gives me all that, including stack
> safety, and other things as well.

As you may recall from looking at Forth, an extremely common idiom is to 
have stack comments next to each word.  Paul Bennett's earlier example:

> : SQUARED (S n -- n^2)
> (G n^2 is the resultant of the number n multiplied by itself. )
> (  Limitations: the number n can be no larger than the square )
> (  root of an n^2 which occupies all the bits of a cell).
>    DUP * ;

As you will recall (but for the benefit of Lispers who might not know), 
the stack comment is the (S n -- n^2) part and it indicates that one 
number n is the input on the stack and one number n^2 is the output. 
You might not know that although these are comment lines, Paul's use of 
(S and (G in the line below are actually intended for machine 
interpretation.  In other words, a Forth program reads those comments 
and can produce documentation from them.  I don't know if he uses such a 
tool, but it would certainly be possible to chain stack comments 
together to check for stack mismatches.
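
Just to make that idea concrete, a depth-only version of such a check 
might look roughly like this (every name below is invented for 
illustration; I'm not describing a tool that actually exists):

variable sim-depth

: check-from ( n -- )  sim-depth ! ;   \ items on the stack going in
: effect  ( #in #out -- )              \ apply one word's declared effect
   swap sim-depth @ swap -             \ depth left after the inputs
   dup 0< abort" stack underflow in this sequence"
   + sim-depth ! ;
: check-to ( n -- )                    \ items promised coming out
   sim-depth @ <> abort" stack depth mismatch" ;

\ checking DUP *  as used in SQUARED ( n -- n^2 ):
1 check-from
   1 2 effect   \ DUP ( x -- x x )
   2 1 effect   \ *   ( n1 n2 -- n3 )
1 check-to      \ no abort: the net effect matches ( n -- n^2 )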

He probably doesn't have or use such a tool because the idiomatic use of 
the stack is such a basic part of Forth.  It would be like writing a 
Lisp program to track the use of car and cdr and make sure that the 
correct one was used each time.  It might be possible, but it would be 
useless to any experienced Lisp programmer.

I hope that helps.  Also keep in mind that I'm not, and I don't think 
anybody else here is, trying to convince you to use Forth if you have 
already found that Lisp matches your needs better (or for that matter, 
Lisp if you prefer Forth).  A comparative look at the features of 
different languages is often interesting, but it's helped considerably 
if we consider them in the context of the idiomatic use of the language. 
This is similar to the study of human languages -- you probably 
wouldn't study the Navajo language without also learning something 
about Navajo culture.

Ed
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4acp6y338.fsf@franz.com>
Ed Beroset <·······@mindspring.com> writes:

> If I may supply some of the subtext for the exchange from the Forth
> point of view, it might be this:  "If you are concerned about stack
> mismatches, the way to avoid them in Forth is to use definitions which
> don't have too many things on the stack."  That's what "short
> definitions" implies in this context.

>  Your question about long
> definitions is kind of strange from the Forther's view.

Yes, of course; that's easy for me to see, because you already have
a preconception that there isn't a language for which longer
definitions don't automatically make the function more buggy (as
opposed to writing the same amount of functionality into a greater
quantity of shorter definitions).  Since I know a language for which
a longer definition _can_ be written without increasing the likelihood
of bugs, I can separate the concept of wanting longer definitions
from the concept of wanting buggier code.  Obviously, I don't desire
buggier code.

> It's as
> though someone had claimed that language X was a way to write bug-free
> code and
> you'd written, "but what if I wanted to write buggy code?"

First, I'm assuming that you aren't saying that writing shorter
definitions in Forth guarantees the code to be bug-free - right?
If my assumption is false, then we need to start back a little
farther and find out how this is done.  But I think the assumption
is true, because of what your colleagues are saying about testing,
as well as my experiences with Forth (and of course, common sense).

If it's true that short definitions don't guarantee bug-free code,
then your analogy doesn't hold; that's not at all what I'm saying.

> It's not
> that the question doesn't make literal sense, but that it's not clear
> why anyone would want to do that.

Indeed.  Why would one want to write buggy code?  Let's move on.
Do you think I'm saying that?  If so, then you have some assumptions
that need to be adjusted, because I'm obviously not saying that.

> One presumption about why one MIGHT want to do that was to gain
> presumed efficiency, hence the nature of the replies that you received.

In the Lisp world, we tend to look down on efficiency-as-excuse-for-
bugginess.  Of course, there are areas where we desire to compete
with C efficiency, where in order to do so we must model the
hardware-modulo effect (i.e. where adding 1 to the most-positive
machine-integer mysteriously results in the most-negative
machine-integer) but for the most part we try to analyze types
to prove that overflowing to different representations is not
necessary.

> > The responses I've received (as well as my
> > experiences) lead me to conclude that if I want to write long
> > definitions, I should not be using Forth.  That's alright,
> > though; I already have a language in which I can write long
> > definitions without loss of cognitive simplicity.
> 
> So this leads me to ask, what advantage do you find in writing long
> definitions?  In other words, why would one want long definitions
> instead of compounds of shorter ones?  I find that I tend to use the
> latter method in Lisp, and C and many other languages so that the code
> is easier to understand and maintain; I haven't encountered a
> compelling need to write long definitions.

Lexical proximity and modularity.

> > Correct.  But there are times when a large function can indeed be used
> > without any loss of readability.  Table-based code is one such example;
> > in Lisp a case statement might even span many pages, but to break it
> > up would not necessarily increase the readability or any reusability.
> 
> My Lisp is extremely rusty, but it strikes me that a large case
> statement would tend to have short, repetitive actions for each
> case.

Not necessarily, but even if it always did, how do you reconcile
a 1000-entry case statement into a "short" definition?

> That is, you wouldn't have extremely long actions for each case.
> Just as in Forth, you'd break things up into named functions.

And how do you describe the function/word that contains the actual
case statement?  Is it short?  (Of course, we'd have to assume that
there would exist a case type of construct in the first place; Elizabeth
Rather has suggested a particular implementation, but is such an
implementation even likely in the Forth world?)

> >>As various of us have said, the length of the name(s) is irrelevant
> >>except in source.
> > We're talking precisely and only about source.  Since there are
> 
> > apparently no runtime checks for stack mismatches, they are not caught
> > or introduced at compile-time, but instead they become directly dependent
> > on source only.
> 
> That's correct, and since they're in source only, this leads directly
> to the maxim of keeping definitions short, with the understanding that
> the relevant implication isn't the number of characters typed in but
> the complexity and number of stack items for each definition.

Right.  I think we're actually getting somewhere.  Perhaps if you
were to analyze precisely what you mean by "short" and "long",
you would conclude that such measurement is really a qualitative
measurement about the amount of complexity in the stack
interactions.  Possible?

> > Yes, but even this gets away from my question about what to do with these
> > pesky stack mismatches.
> 
> 
> The obvious answer is "avoid them!"  The other respondents have been
> trying to give a more detailed and useful answer, but it boils down to
> just that.

Heh :-)  I actually do avoid them.  It's been 1984 since my last
Forth program :-)

(but more below... ) 

> > It is one of the reasons why I did not stop
> > looking for a better language, even after I had found Forth.  At that
> > time, I was awed by its simplicity and power, including its extensibility.
> > Now, I work in a language that gives me all that, including stack
> > safety, and other things as well.
> 
> As you may recall from looking at Forth, an extremely common idiom is
> to have stack comments next to each word.  Paul Bennett's earlier
> example:
> 
> 
> > : SQUARED (S n -- n^2)
> > (G n^2 is the resultant of the number n multiplied by itself. )
> > (  Limitations: the number n can be no larger than the square )
> > (  root of an n^2 which occupies all the bits of a cell).
> >    DUP * ;
> 
> As you will recall (but for the benefit of Lispers who might not
> know), the stack comment is the (S n -- n^2) part and it indicates
> that one number n is the input on the stack and one number n^2 is the
> output. You might not know that although these are comment lines,
> Paul's use of (S and (G in the line below are actually intended for
> machine interpretation.  In other words, a Forth program reads those
> comments and can product documentation from them.  I don't know if he
> uses such a tool, but it would certainly be possible to chain stack
> comments together to check for stack mismatches.

Yes, I also avoid these in the same way...

I like my code to document itself.  Why?  Because documentation always
becomes out-of-date - nobody has the perfection of discipline to always
change the documentation when the function/word is changed.  But when I
run my code through a tool that generates that documentation
automatically, it is only out of date until I regenerate my documentation.  


> He probably doesn't have or use such a tool because the idiomatic use
> of the stack is such a basic part of Forth.  It would be like writing
> a Lisp program to track the use of car and cdr and make sure that the
> correct one was used each time.  It might be possible, but it would be
> useless to any experienced Lisp programmer.

On the contrary; we use these tools all the time (though not to check,
but to actually generate).  Especially when one uses the shortcut
functions that combine a "c" with any number of "a" and "d" followed
finally by "r" (as in cadr, cadadr, etc) to follow down the nodes of
trees or other list structures; we often use macros to decide what the
correct branch in the tree is at a particular point, given a higher-level
specification.

> I hope that helps.  Also keep in mind that I'm not, and I don't think
> anybody else here is, trying to convince you to use Forth if you have
> already found that Lisp matches your needs better (or for that matter,
> Lisp if you prefer Forth).  A comparative look at the features of
> different languages is often interesting, but it's helped considerably
> if we consider them in the context of the idiomatic use of the
> language. This is similar to the study of human languages -- you
> probably wouldn't  study the Navajo language without also learning
> something about Navajo culture.

Yes, of course; I know the Forth of the 80s, but I don't know what
tools people are using to enhance their productivity in Forth
nowadays, and hence my questioning.  Note that I'm also not trying
to "convert" you to Lisp use, but I am trying to get you to think
about how your productivity can be increased, given that you like
to stay in Forth.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Elizabeth D Rather
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <113bquhgqorgs06@news.supernews.com>
"Duane Rettig" <·····@franz.com> wrote in message 
··················@franz.com...
> Ed Beroset <·······@mindspring.com> writes:
> ...
>> So this leads me to ask, what advantage do you find in writing long
>> definitions?  In other words, why would one want long definitions
>> instead of compounds of shorter ones?  I find that I tend to use the
>> latter method in Lisp, and C and many other languages so that the code
>> is easier to understand and maintain; I haven't encountered a
>> compelling need to write long definitions.
>
> Lexical proximity and modularity.

Indeed, one would want to put closely-related definitions nearby (e.g. in 
the same file) so you can see them easily.  When that is impractical (for 
example, you are using a word that was defined in another file for various 
good reasons) most modern Forths have easy ways of displaying it.  In our 
systems, we can say, LOCATE <name> and the source for <name> will be 
displayed.

I have a little trouble understanding how long definitions contribute to 
modularity; it seems the opposite to me.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39m1klF63cuabU1@individual.net>
Duane Rettig wrote:
>>So this leads me to ask, what advantage do you find in writing long
>>definitions?  In other words, why would one want long definitions
>>instead of compounds of shorter ones?  I find that I tend to use the
>>latter method in Lisp, and C and many other languages so that the code
>>is easier to understand and maintain; I haven't encountered a
>>compelling need to write long definitions.
> 
> 
> Lexical proximity and modularity.

If all related code is concentrated in one place, that's good, yes. 
That would also include not spreading one function call (with complex 
arg expressions) over more than four lines, I'd say, but instead binding 
everything with "let".  So this leads more and more to an "imperative" 
style (at least people who don't like "let" say that; I like "let").

Still, I sometimes prefer to have extra functions.  As for modularity, I 
feel this is completely orthogonal; you just export whatever functions 
you like from a module.  Short functions are probably easier to test 
individually.

> Not necessarily, but even if it always did, how do you reconcile
> a 1000-entry case statement into a "short" definition?

I've never seen a need for even a 100-entry case.  Wouldn't these cases 
be best generated by macros, or maybe implemented in a data-directed 
style?  Case is useful and convenient for small definitions, though; I 
don't like conds with all those (eq foo 'bla) things in them.

>  That is, you wouldn't have extremely long actions for each case.

Same in Forth, I think.  Isn't the case statement even ANS-Forth?
The contents of even a large case would be rather simple, in Forth 
style.  Probably you would test some of them individually before putting 
them in that huge (case) context.
From: Albert van der Horst
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <idd48e.298@spenarnc.xs4all.nl>
In article <···············@individual.net>,
Ulrich Hobelmann  <···········@web.de> wrote:
>Duane Rettig wrote:
>
>Still, I sometimes prefer to have extra functions.  As for modularity, I
>feel this is completely orthogonal; you just export whatever functions
>you like from a module.  Short functions are probably easier to test
>individually.
>
>> Not necessarily, but even if it always did, how do you reconcile
>> a 1000-entry case statement into a "short" definition?

>
>I've never seen a need for even a 100-entry case.  Wouldn't these cases
>be best generated by macros, or maybe implemented in a data-directed
>style?  Case is useful and convenient for small definitions, though; I
>don't like conds with all those (eq foo 'bla) things in them.

A typical case (!) is an assembler. Other languages could have
a case statement, or a large array of structs.
Forth assemblers typically have a word that handles an opcode.
It can indeed be tested in isolation (or at least without other
opcodes, though not without its operands).

>
>>  That is, you wouldn't have extremely long actions for each case.
>
>Same in Forth, I think.  Isn't the case statement even ANS-Forth?
>The contents of even a large case would be rather simple, in Forth
>style.  Probably you would test some of them individually before putting
>them in that huge (case) context.

Minimalists (like me) consider CASE another superfluous construct.
It is in CORE EXTENSIONS so you can leave it out, and still keep your
ANSI badge.


Groetjes Albert

--
-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: Ed Beroset
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <XNfZd.3580$qW.3483@newsread3.news.atl.earthlink.net>
Duane Rettig wrote:
> Ed Beroset <·······@mindspring.com> writes:
> 
>>that the question doesn't make literal sense, but that it's not clear
>>why anyone would want to do that.
> 
> Indeed.  Why would one want to write buggy code?  Let's move on.
> Do you think I'm saying that?  

No, of course not!  It was just an example intended to convey the 
strangeness of your actual question (from a Forth programmer's point of 
view) and nothing more.  Your actual question, as I understood it, was 
"how do I write long definitions in Forth?"  If I've misunderstood you, 
please let me know.

>>One presumption about why one MIGHT want to do that was to gain
>>presumed efficiency, hence the nature of the replies that you received.
> 
> In the Lisp world, we tend to look down on efficiency-as-excuse-for-
> bugginess.  

I think that's a pretty common attitude among programmers in any 
language, and I think it's a good one.

> Of course, there are areas where we desire to compete
> with C efficiency, where in order to do so we must model the
> hardware-modulo effect (i.e. where adding 1 to the most-positive
> machine-integer mysteriously results in the most-negative
> machine-integer) but for the most part we try to analyze types
> to prove that overflowing to different representations is not
> necessary.

That makes sense.

>>So this leads me to ask, what advantage do you find in writing long
>>definitions?  In other words, why would one want long definitions
>>instead of compounds of shorter ones?  I find that I tend to use the
>>latter method in Lisp, and C and many other languages so that the code
>>is easier to understand and maintain; I haven't encountered a
>>compelling need to write long definitions.
> 
> Lexical proximity and modularity.

Let's make sure I understand you correctly.  I think what you're saying 
is that one advantage to writing long definitions is that with a long 
definition, the programmer is reading in just one place to figure out 
what's going on instead of jumping all over in the source code.  Is that 
a fair paraphrase?

>>>Correct.  But there are times when a large function can indeed be used
>>>without any loss of readability.  Table-based code is one such example;
>>>in Lisp a case statement might even span many pages, but to break it
>>>up would not necessarily increase the readability or any reusability.
>>
>>My Lisp is extremely rusty, but it strikes me that a large case
>>statement would tend to have short, repetitive actions for each
>>case.
> 
> Not necessarily, but even if it always did, how do you reconcile
> a 1000-entry case statement into a "short" definition?

It would be a short definition which processes a 1000-entry data 
structure containing test, action tuples.  :-)

>  That is, you wouldn't have extremely long actions for each case.
> 
>>Just as in Forth, you'd break things up into named functions.
> 
> 
> And how do you describe the function/word that contains the actual
> case statement?  Is it short?  

Probably.  It's easy to add CASE statement semantics to Forth, and some 
use it.  I don't tend to.  If it's short (three or four possibilities) 
I'll probably just use IF, and if it's long, I'd probably use a vectored 
approach.

> (Of course, we'd have to assume that
> there would exist a case type of construct in the first place; Elizabeth
> Rather has suggested a particular implementation, but is such an
> implementation even likely in the Forth world?)

Very!  It's what I thought of, too, and Ms. Rather and I are probably on 
opposite ends of the Forth programming competency spectrum!  (She's on 
the "extremely competent" end, just to be clear.)

>>>>As various of us have said, the length of the name(s) is irrelevant
>>>>except in source.
>>>
>>>We're talking precisely and only about source.  Since there are
>>
>>>apparently no runtime checks for stack mismatches, they are not caught
>>>or introduced at compile-time, but instead they become directly dependent
>>>on source only.
>>
>>That's correct, and since they're in source only, this leads directly
>>to the maxim of keeping definitions short, with the understanding that
>>the relevant implication isn't the number of characters typed in but
>>the complexity and number of stack items for each definition.
> 
> Right.  I think we're actually getting somewhere.  Perhaps if you
> were to analyze precisely what you mean by "short" and "long",
> you would conclude that such measurement is really a qualitative
> measurement about the amount of complexity in the stack
> interactions.  Possible?

Yes, and I'd accept that as a better and more concise definition than 
"short" vs. "long" in this context.  In fact, as others have mentioned, 
excessive stack manipulations are usually a sign that something is 
"long" and needs to be factored into multiple "short" definitions.

>>>Yes, but even this gets away from my question about what to do with these
>>>pesky stack mismatches.
>>
>>The obvious answer is "avoid them!"  The other respondents have been
>>trying to give a more detailed and useful answer, but it boils down to
>>just that.
> 
> Heh :-)  I actually do avoid them.  It's been 1984 since my last
> Forth program :-)

It's like the old joke:
Patient: "Doctor, it hurts when I do this."
Doctor: "Then don't do that."

;-)

>>machine interpretation.  In other words, a Forth program reads those
>>comments and can produce documentation from them.  I don't know if he
>>uses such a tool, but it would certainly be possible to chain stack
>>comments together to check for stack mismatches.
> 
> Yes, I also avoid these in the same way...
> 
> I like my code to document itself.  Why?  Because documentation always
> becomes out-of-date - nobody has the perfection of discipline to always
> change the documentation when the function/word is changed.  

I do the same thing with my assembly language code, my Java code, my C++ 
code, my C code, my Perl code ... etc.  Interestingly, two of those 
languages (Perl and Java) have documentation generation features more or 
less built into the language specification.  Apparently, you and I 
aren't the only people who think it's important!

>>He probably doesn't have or use such a tool because the idiomatic use
>>of the stack is such a basic part of Forth.  It would be like writing
>>a Lisp program to track the use of car and cdr and make sure that the
>>correct one was used each time.  It might be possible, but it would be
>>useless to any experienced Lisp programmer.
> 
> On the contrary; we use these tools all the time (though not to check,
> but to actually generate).  

That's a very different thing.  As someone (Bernd Paysan?) already 
mentioned, the checking tool would be like training wheels on a bicycle. 
Code generation would be more like building a bicycle factory.  (I'm 
starting to hurt myself with these poor analogies -- your indulgence is 
requested!)

> functions that combine a "c" with any number of "a" and "d" followed
> finally by "r" (as in cadr, cadadr, etc) to follow down the nodes of
> trees or other list structures; we often use macros to decide what the
> correct branch in the tree is at a particular point, given a higher-level
> specification.

That's somewhat similar in philosophy, if not mechanics, to how a Forth 
programmer would probably name things.  That is, words that do similar 
things are given similar names.


> Yes, of course; I know the Forth of the 80s, but I don't know what
> tools people are using to enhance their productivity in Forth
> nowadays, and hence my questioning.  

It's funny; I only know the Lisp of the 80s.  Just to forewarn you, my 
opinions on this matter are going to be of limited use because I don't 
do that much programming in Forth.  More experienced people here 
(Elizabeth Rather, Bernd Paysan, Anton Ertl, Paul Bennett and others) 
can give you a much better view into that question than I could.  I'm 
more of an apprentice Forth user; those people are master Forth 
builders.  I only stepped in to try to provide translation services.  :-)

> Note that I'm also not trying
> to "convert" you to Lisp use, but I am trying to get you to think
> about how your productivity can be increased, given that you like
> to stay in Forth.

You'd be surprised, then.  I have some things I want to add to Macsyma, 
which I am happily running on my PDA, and of course the language there 
is Lisp.  (However, I also have Forth on the same PDA).  My motto is 
"use the right tool for the job" and because I'm a naturally lazy 
fellow, I like to have a lot of tools available!

Ed
From: Stephen Pelc
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <42358a5b.432733718@192.168.0.1>
On 14 Mar 2005 01:43:55 -0800, Duane Rettig <·····@franz.com> wrote:

>Yes, of course; I know the Forth of the 80s, but I don't know what
>tools people are using to enhance their productivity in Forth
>nowadays, and hence my questioning.

That's not an unusual statement. Earlier in your post you
mention documentation tools. All of the VFX Forth for
Windows/DOS/Linux manuals are generated directly from the
source code by a tool we call DocGen. DocGen is included
in all versions of VFX Forth, including the free download
from our website.

It is possible to do formal stack checking of Forth. The
mathematics was developed by Jaanus Poial and Bill Stoddart,
and a prototype tool was written by Jaanus. The papers may be
found through the EuroForth web site linked at
  http://www.forth.org

Others are actively developing such tools, which are
also applicable to Java. We have clients who want these
tools.

In general, Forth has moved on since the 80s. Just like
all other languages.

Stephen

--
Stephen Pelc, ··········@INVALID.mpeltd.demon.co.uk
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeltd.demon.co.uk - free VFX Forth downloads
From: Anton Ertl
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <2005Mar14.195521@mips.complang.tuwien.ac.at>
Duane Rettig <·····@franz.com> writes:
>And how do you describe the function/word that contains the actual
>case statement?  Is it short?  (Of course, we'd have to assume that
>there would exist a case type of construct in the first place; Elizabeth
>Rather has suggested a particular implementation, but is such an
>implementation even likely in the Forth world?)

The typical way to implement something like a big C switch statement
in Forth is to have a dispatch table and use EXECUTE to call the word
that should actually be executed.

How is the table initialized?  Typically not in a big word, but in
interpreted (compile-time executed) code.  Typically that code is so
simple and stylized that stack depth errors don't happen; e.g., stuff
like:

Create ctrlkeys
    ' false a, ' false a, ' false a, ' false a, 
    ' false a, ' false a, ' false a, ' false a,

    ' (bs)  a, ' false a, ' (ret) a, ' false a, 
    ' false a, ' (ret) a, ' false a, ' false a,

    ' false a, ' false a, ' false a, ' false a, 
    ' false a, ' false a, ' false a, ' false a,

    ' false a, ' false a, ' false a, ' false a, 
    ' false a, ' false a, ' false a, ' false a,

or

' next-line  ctrl N bindkey
' prev-line  ctrl P bindkey
' clear-tib  ctrl K bindkey
' first-pos  ctrl A bindkey
' end-pos    ctrl E bindkey
' (enter)    #lf    bindkey
' (enter)    #cr    bindkey
' tab-expand #tab   bindkey
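
(bindkey is not a standard word, of course; purely as a sketch, it
could be as little as this, with the table pre-filled with a default
xt such as ' false:)

create keytable  128 cells allot   \ one execution token per key code
: bindkey    ( xt key -- )  cells keytable + ! ;
: handle-key ( key -- )     cells keytable + @ execute ;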

There is also Eaker's CASE (the ANS Forth CASE ... ENDCASE construct),
but it's more like LISP's cond and no one in his right mind would use
it for 1000-entry case statements.

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: Paul E. Bennett
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d154i6$ql7$1$830fa795@news.demon.co.uk>
Duane Rettig wrote:

> First, I'm assuming that you aren't saying that writing shorter
> definitions in Forth guarantees the code to be bug-free - right?
> If my assumption is false, then we need to start back a little
> farther and find out how this is done.  But I think the assumption
> is true, because of what your colleagues are saying about testing,
> as well as my experiences with Forth (and of course, common sense).

Even with the shortest definitions there is no guarantee that the code will 
be totally free of bugs, but bugs in freshly written functions (words) are 
much less likely if the word is smaller.
 
> If it's true that short definitions don't guarantee bug-free code,
> then your analogy doen't hold; that's not at all what I'm saying.

No guarantees, but certainly much less likelihood.
 
> Not necessarily, but even if it always did, how do you reconcile
> a 1000-entry case statement into a "short" definition?

Most Forthists would probably have worked to eliminate the need for the 
CASE statement in the first place. We have other methods that can prove to 
be very elegant solutions without needing CASE. I can imagine that such a 
monster would probably be resolved with an FSM compiler built for the 
purpose of managing this situation (we are quite adept at writing new 
compilers for specific purposes).
 
> I like my code to document itself.  Why?  Because documentation always
> becomes out-of-date - nobody has the perfection of discipline to always
> change the documentation when the function/word is changed.  But when I
> run my code through a tool that generates that documentation
> automatically, it is only out of date until I regenerate my documentation.

Which is why I include the immediate code documentation within the source 
files as my example showed. I do it for every word in an application and, 
because it is co-located with the source it describes, keeping it up to 
date is always in mind. Though, I have to admit, by the time the code has 
emerged from its third review/test iteration there is usually very little 
need for any of it to be changed.
 
>> He probably doesn't have or use such a tool because the idiomatic use
>> of the stack is such a basic part of Forth.  It would be like writing
>> a Lisp program to track the use of car and cdr and make sure that the
>> correct one was used each time.  It might be possible, but it would be
>> useless to any experienced Lisp programmer.

I can confirm that I do not need the tool support in development. It would 
be handy to make shorter work of producing the demanded level of user 
documentation when the client requires such. However, more often than 
not I am embedding Forth so deep in a system that the client doesn't know 
that Forth is involved in the solution. That makes the final documentation 
a bit easier, as I don't give such clients any listings of the firmware. To 
them it is an LRU which, if it fails (usually for hardware reasons), they 
just replace from spares.
 
> Yes, of course; I know the Forth of the 80s, but I don't know what
> tools people are using to enhance their productivity in Forth
> nowadays, and hence my questioning.  Note that I'm also not trying
> to "convert" you to Lisp use, but I am trying to get you to think
> about how your productivity can be increased, given that you like
> to stay in Forth.

For me (YMMV) I cover both hardware and software development tasks. Most of 
my time is spent in the documentation and project management aspects. If I 
need productivity improvements it is anywhere but in the Forth environment. 
Some method of reading huge swathes of specs, datasheets and reports much 
faster would be where I would want my productivity improvements.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: billy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1111010207.786856.248610@g14g2000cwa.googlegroups.com>
This post is getting too long -- it's probably buggy.

:-)

Duane Rettig wrote:
> Ed Beroset <·······@mindspring.com> writes:

> Yes, of course; that's easy for me to see, because you already have
> a preconception that there isn't a language for which longer
> definitions don't automatically make the function more buggy (as
> opposed to writing the same amount of functionality into a greater
> quantity of shorter definitions).  Since I know a language for which
> a longer definition _can_ be written without increasing the likelihood
> of bugs, I can separate the concept of wanting longer definitions
> from the concept of wanting buggier code.  Obviously, I don't desire
> buggier code.

There is no such language. Increasing the size of a module increases
its complexity superlinearly.

> First, I'm assuming that you aren't saying that writing shorter
> definitions in Forth guarantees the code to be bug-free - right?

Of course not. We're saying that shorter functions are easier to
demonstrate/prove to be correct.

> If it's true that short definitions don't guarantee bug-free code,
> then your analogy doesn't hold; that's not at all what I'm saying.

Also not true. We only need to show that shorter functions reduce the
probability of bugs (which they do).

> Not necessarily, but even if it always did, how do you reconcile
> a 1000-entry case statement into a "short" definition?

I would instead build a 1000-entry table, and fill it one entry at a
time (possibly with the help of a defining word). Or I would attempt to
build a less complex algorithm -- 1000 branches in a single point of
control is frankly too many to manage.
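
Roughly this shape (names invented, just a sketch):

create optable  1000 cells allot    \ one execution token per entry
\ (real code would first fill every slot with a default xt)

: entry: ( n "name" -- )            \ fill one entry:  42 entry: handle-42
   '  swap cells optable + ! ;

: dispatch ( n -- )                 \ the only branch point left
   cells optable + @ execute ;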

> Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/

-Billy
From: Albert van der Horst
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <idhwlo.i55@spenarnc.xs4all.nl>
In article <························@g14g2000cwa.googlegroups.com>,
billy <···········@gmail.com> wrote:
>This post is getting too long -- it's probably buggy.
>
>:-)
>
>Duane Rettig wrote:
>> Ed Beroset <·······@mindspring.com> writes:
>
>> Yes, of course; that's easy for me to see, because you already have
>> a preconception that there isn't a language for which longer
>> definitions don't automatically make the function more buggy (as
>> opposed to writing the same amount of functionality into a greater
>> quantity of shorter definitions).  Since I know a language for which
>> a longer definition _can_ be written without increasing the likelihood
>> of bugs, I can separate the concept of wanting longer definitions
>> from the concept of wanting buggier code.  Obviously, I don't desire
>> buggier code.
>
>There is no such language. Increasing the size of a module increases
>its complexity superlinearly.

Not true. A C program and its assembler equivalent must have the
same complexity, or I would reject your definition of complexity.

The assembler equivalent is typically 5 times as long.

You are in good company in not understanding why small modules
lead to fewer bugs. I had Les Hatton admit it in his course Safer C.

Small modules lead to fewer bugs through the intermediary of
smaller programs. In the above example you could say (stylized)
that by adding a limited number of assembler modules (each the
equivalent of a C-type statement) you have shrunk the assembler
program to one fifth its size -- the size of the C program.

The extreme reduction in program size by the Forth design paradigm of
an application language is what makes Forth relatively bug free. There
are just fewer lines to look at. It is that simple.

Les Hatton assumes (as you do, implicitly) that the choice
is between a program of ten thousand-line modules or a thousand
ten-line modules. It is not. With ten-line modules your program
would not come anywhere near 10,000 lines total.

>
>-Billy
>

Groetjes Albert

-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: billy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1111090963.619099.179640@l41g2000cwc.googlegroups.com>
Albert van der Horst wrote:
> billy <···········@gmail.com> wrote:
> >Duane Rettig wrote:
> >> Ed Beroset <·······@mindspring.com> writes:

> >> quantity of shorter definitions).  Since I know a language for which
> >> a longer definition _can_ be written without increasing the likelihood
> >> of bugs, I can separate the concept of wanting longer definitions
> >> from the concept of wanting buggier code.  Obviously, I don't desire
> >> buggier code.

> >There is no such language. Increasing the size of a module increases
> >its complexity superlinearly.

> Not true. A c program and its assembler equivalent must have the
> same complexity, or I would reject your definition of complexity.

Your statement is not a denial of mine. Translating a program from one
language to another does not "add lines" to the original program; it
produces a new program in a new language which allegedly does the same
thing. The number of lines is not relevant.

If you wish to measure complexity in lines of code (this is not my
measure of choice, but you seem to want it), you have to match
your units. An LOC in C is not the same as an LOC in assembler.

The Lisp function we were shown is an immensely long function,
regardless of LOC or tokens or functionality count or whatever metric
you care to define. It manages to become barely manageable by using
some clever tricks, but I doubt that most people would be able to
correctly write (the first time) such a monster even with all those
tricks; I suspect that the reason that function is used is that someone
automatically generated it, and many people since have taken advantage
of its regular structure to add new things into it, and again because
of its regular structure there's little risk of "rippling changes".

Contrary to the original poster's implication, a similar function in
Forth is quite possible; like the Lisp function, such a Forth function
would rely on indentation and spacing to make it clear where and how to
perform insertions and changes.

> The assembler equivalent is typically 5 times as long.

Although your objection doesn't apply to my statement, I do find it
curious that most people find it harder to maintain assembly programs
than programs in higher level languages. Why, if according to your
ideas the two have the same complexity? I would argue that in fact the
two do NOT have the same complexity; if the assembler program is 5
times longer, then each possible C error locus will correspond to about
5 possible assembler error loci. I could, for example, use a "*"
operator instead of a "+"; but in assembler I would have 5 instructions
whose opcodes I could get wrong, and there would be a large selection
of possible opcodes that would produce no error from the compiler, and
a smaller but still significant selection that would not produce
obvious errors -- but these are just substitution errors; omission and
insertion errors make the problem worse.

So no, I don't buy the statement that assembly is the same complexity
as C; certainly not for the purposes we're discussing.

> You are in good company in not understanding the issue
> why small modules leads to less bugs. I had Les Hatton admit
> it in his course Safer C.

I don't doubt it :-).

> Small modules leads less bugs through the intermediate of
> smaller programs.

Well sure they do! A small program is a small module, though.

> In the above example you could say (stylized)
> by adding a limited number of assembler modules (each the
> equivalent of a c-type statement) you have decreased the
> assembler program to a one fifth c-program.

Yes, I can see how that's possible. I've done it. It's a matter of
using the additional expressive power of a lower-generation language to
do more precisely what you need done than what a high-level language
can express.

Of course, an extensible language like Forth or Lisp can substitute for
a low-generation language, since they can do low-generation things in a
high-level manner.

> Les Hatton assumes (like you do, implicitly) that the choice
> is between a program of ten thousand-line-modules or thousand
> ten-line-modules. It is not.

I do not make this assumption. I do, however, define a program as a
module (as is a function). A smaller module is easier to analyse than a
larger one.

> With ten line modules your program
> would not reach 10.000 lines total by a far stretch.

I do not make this assumption either, and I deny it. Stupid factoring
will balloon the size of a program as much or more than too little
factoring. The remedy is good design, not blind factoring.

> Groetjes Albert

-Billy
From: Albert van der Horst
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <idjk9x.enj@spenarnc.xs4all.nl>
In article <························@l41g2000cwc.googlegroups.com>,
billy <···········@gmail.com> wrote:
>Albert van der Horst wrote:
>> billy <···········@gmail.com> wrote:
>> >Duane Rettig wrote:
>> >> Ed Beroset <·······@mindspring.com> writes:
>
>> >> quantity of shorter definitions).  Since I know a language for which
>> >> a longer definition _can_ be written without increasing the likelihood
>> >> of bugs, I can separate the concept of wanting longer definitions
>> >> from the concept of wanting buggier code.  Obviously, I don't desire
>> >> buggier code.
>
>> >There is no such language. Increasing the size of a module increases
>> >its complexity superlinearly.
>
>> Not true. A c program and its assembler equivalent must have the
>> same complexity, or I would reject your definition of complexity.
>
>Your statement is not a denial of mine. Translating a program from one
>language to another does not "add lines" to the original program; it
>produces a new program in a new language which allegedly does the same
>thing. The number of lines is not relevant.

Okay.

>
>If you wish to measure complexity in lines of code (this is not my
>measure of choice, but you seem to want it), you have to have to match
>your units. An LOC in C is not the same as an LOC in assembler.

I don't. I argue that a C program and its equivalent assembler program
are essentially the same program.

>
>The Lisp function we were shown is an immensely long function,
>regardless of LOC or tokens or functionality count or whatever metric
>you care to define. It manages to become barely manageable by using
>some clever tricks, but I doubt that most people would be able to
>correctly write (the first time) such a monster even with all those
>tricks; I suspect that the reason that function is used is that someone
>automatically generated it, and many people since have taken advantage
>of its regular structure to add new things into it, and again because
>of its regular structure there's little risk of "rippling changes".
>
>Contrary to the original poster's implication, a similar function in
>Forth is quite possible; like the Lisp function, such a Forth function
>would rely on indentation and spacing to make it clear where and how to
>perform insertions and changes.
>
>> The assembler equivalent is typically 5 times as long.
>
>Although your objection doesn't apply to my statement, I do find it
>curious that most people find it harder to maintain assembly programs
>than programs in higher level languages. Why, if according to your
>ideas the two have the same complexity? I would argue that in fact the
>two do NOT have the same complexity; if the assembler program is 5
>times longer, then each possible C error locus will correspond to about
>5 possible assembler error loci. I could, for example, use a "*"

This is not complexity. You are arguing that loading 5 wagons of
hay is more complex than loading one wagon.

Maybe a better analogy. Assembler programming is programming with
a small hay stick. You have to lift five times as many.
Not complexity, in any reasonable meaning of the word.
Cumbersome is a better word.

<SNIP>

>
>So no, I don't buy the statement that assembly is the same complexity
>as C; certainly not for the purposes we're discussing.

So I reject your definition. And by implication your conclusions.

>> You are in good company in not understanding the issue
>> why small modules leads to less bugs. I had Les Hatton admit
>> it in his course Safer C.
>
>I don't doubt it :-).
>
>> Small modules leads less bugs through the intermediate of
>> smaller programs.
>
>Well sure they do! A small program is a small module, though.
>
>> In the above example you could say (stylized)
>> by adding a limited number of assembler modules (each the
>> equivalent of a c-type statement) you have decreased the
>> assembler program to a one fifth c-program.
>
>Yes, I can see how that's possible. I've done it. It's a matter of
>using the additional expressive power of a lower-generation language to
>do more precisely what you need done than what a high-level language
>can express.
>
<SNIP>
>
>> Les Hatton assumes (like you do, implicitly) that the choice
>> is between a program of ten thousand-line-modules or thousand
>> ten-line-modules. It is not.
>
>I do not make this assumption. I do, however, define a program as a
>module (as is a function). A smaller module is easier to analyse than a
>larger one.

But nobody is interested in modules! We want function points.
We want a total program that does something for you.
It is up to the programmer to decide how many modules, WOCs,
LOCs or mouse clicks to use up.

What you argue here is that you could program a function point
with an arbitrarily small module. You can't be serious.

>> With ten line modules your program
>> would not reach 10.000 lines total by a far stretch.
>
>I do not make this assumption either, and I deny it. Stupid factoring
>will balloon the size of a program as much or more than too little
>factoring. The remedy is good design, not blind factoring.

Stupid factoring is something rather unheard of, at least in Forth.
I know that people like Marcel Hendrix / gforth sometimes make
"complete sets" of operators, where some may never be used probably.
This may be stupid (depending on your point of view), and adds bloat,
but it is not factoring.

I would be interested in a real or made up example of stupid
factoring.

>-Billy

Groetjes Albert

-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: billy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1111345511.760616.76470@f14g2000cwb.googlegroups.com>
Albert van der Horst wrote:
> billy <···········@gmail.com> wrote:
> >If you wish to measure complexity in lines of code (this is not my
> >measure of choice, but you seem to want it), you have to match
> >your units. An LOC in C is not the same as an LOC in assembler.

> I don't. I argue that a C program and its equivalent assembler program
> are essentially the same program.

You can argue that for some definition of "essentially the same
program". I would hope your arguments would never have any practical
effect, though; if they did, people would be using assembler where C
was appropriate or vice versa. The languages are different and have
different effects on programs written in them.

> >> The assembler equivalent is typically 5 times as long.

> >Although your objection doesn't apply to my statement, I do find it
> >curious that most people find it harder to maintain assembly programs
> >than programs in higher level languages. Why, if according to your
> >ideas the two have the same complexity? I would argue that in fact the
> >two do NOT have the same complexity; if the assembler program is 5
> >times longer, then each possible C error locus will correspond to about
> >5 possible assembler error loci. I could, for example, use a "*"

> This is not complexity. You are arguing that loading 5 wagons of
> hay is more complex than loading one wagon.

No, I am absolutely not arguing that. In fact, I didn't use the word
"hay" or "wagon", and I wouldn't because I think it's a useless
analogy; each shovel of hay is roughly the same as every other shovel,
while each line of code depends critically on many of the lines before
and after it.

I'm using the word "complexity" in the well-defined sense that
information theory uses it. A C statement generates 5 lines of
assembler code; but not every 5 lines of assembler code can be
converted into a single C statement. This reveals that the
informational content of 5 lines of assembler is greater than the
informational content of 1 line of C.

> Maybe a better analogy. Assembler programming is programming with
> a small hay stick. You have to lift five times as many.
> Not complexity, in any reasonable meaning of the word.
> Cumbersome is a better word.

*If* this were the only difference, nobody would ever use assembly,
just as nobody pitches less hay than they can repeatably lift. It would
be a complete waste of time.

The reason that assembly is sometimes used is that assembly has more
actual complexity than C. Actually, this is also why people tend to
avoid using assembly unless it's needed. It's not just because it's
cumbersome (although it is); it's also because there are so many more
possible mistakes to make.

> >> Les Hatton assumes (like you do, implicitly) that the choice
> >> is between a program of ten thousand-line-modules or thousand
> >> ten-line-modules. It is not.

> >I do not make this assumption. I do, however, define a program as a
> >module (as is a function). A smaller module is easier to analyse
than a
> >larger one.

> But nobody is interested in modules! We want function points.
> We want a total program that does something for you.
> It is up to the programmer to decide how many modules, WOC's,
> LOC's or mouse clicks you use up.

> What you argue here is, that you could program a function point
> with an arbitrary small module. You can't be serious.

I don't argue this, and I don't understand why you're saying that I do.
I do notice that you keep putting absurd words in my mouth, though; you
just got through telling me that I was arguing about haystacking, a
topic of which I have only the most indirect knowledge and which I'm
almost certain I made no allusion to. Now you're telling me that I was
arguing about function points.

Please stop telling me that I'm arguing about something I never
mentioned.

> >> With ten line modules your program
> >> would not reach 10.000 lines total by a far stretch.
> >
> >I do not make this assumption either, and I deny it. Stupid factoring
> >will balloon the size of a program as much or more than too little
> >factoring. The remedy is good design, not blind factoring.
>
> Stupid factoring is something rather unheard of, at least in Forth.

It's clear that you don't know what I meant when I wrote "stupid
factoring". I apologise for not defining my terms.

> I know that people like Marcel Hendrix / gforth sometimes make
> "complete sets" of operators, where some may never be used probably.
> This may be stupid (depending on your point of view), and adds bloat,
> but it is not factoring.

I agree.

> I would be interested in a real or made up example of stupid
> factoring.

In general, factoring is 'stupid' if the program's code is forced to
say the same thing several different times when once would have been
sufficient, or if it has to compute the same result multiple times when
once would have been sufficient.

The result of "stupid factoring" is very big or very slow code, or
both.

Stupid factoring can build a 10,000 line program out of 10-line modules
(contrary to your statement that a 10,000 line program could never be
built from 10-line modules).
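
A made-up sketch in Lisp (illustrative only, not from any real program)
of the kind of factoring meant above: the work is split into small,
individually correct functions, but the split forces the same result to
be recomputed over and over.

  (defun mean (xs)
    (/ (reduce #'+ xs) (length xs)))

  ;; Badly factored: MEAN is recomputed once per element, so the list is
  ;; walked again and again where a single pass would have been enough.
  (defun variance (xs)
    (/ (reduce #'+ (mapcar (lambda (x) (expt (- x (mean xs)) 2)) xs))
       (length xs)))

  ;; The same small modules, factored sensibly: compute the mean once.
  (defun better-variance (xs)
    (let ((m (mean xs)))
      (/ (reduce #'+ (mapcar (lambda (x) (expt (- x m) 2)) xs))
         (length xs))))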

> Groetjes Albert

-Billy
From: Albert van der Horst
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <idp5b2.580@spenarnc.xs4all.nl>
In article <·······················@f14g2000cwb.googlegroups.com>,
billy <···········@gmail.com> wrote:
<SNIP>
>
>You can argue that for some definition of "essentially the same
>program". I would hope your arguments would never have any practical
>effect, though; if they did, people would be using assembler where C
>was appropriate or vice versa. The languages are different and have
>different effects on programs written in them.

If you are using GCC, each program is converted to assembly first,
representing the same program.
This is a statement of fact. I wonder why you are worried about the
"practical effects" of my stating it. Is it something like the
theory of evolution that should not be taught because it is too
convincing?

<SNIP>

>Stupid factoring can build a 10,000 line program out of 10-line modules
>(contrary to your statement that a 10,000 line program could never be
>built from 10-line modules).

10,000 line programs are rare in Forth. I haven't seen any that
are stupid and built of 10-line (let us say 10 Words of Code)
modules.
I challenged you for an example. My "unheard of, at least in Forth" still stands.

What I tried to say is that if you rebuild a 10K WOC program made of 1K WOC
modules using 10 WOC modules, you do not end up with a 10K WOC
program. It becomes much smaller. That reduces the
maintenance chores and increases reliability.

Lately Bernd Paysan showed an example of "even better" factoring.
We shouldn't use the term "stupid factoring" for the best efforts
of those less brilliant. "Bad or non-optimal factoring" will do.

>-Billy
>

Groetjes Albert

-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: John Doty
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <423F1E25.5000707@whispertel.LoseTheH.net>
Albert van der Horst wrote:
> In article <·······················@f14g2000cwb.googlegroups.com>,
> billy <···········@gmail.com> wrote:
> <SNIP>
> 
>>You can argue that for some definition of "essentially the same
>>program". I would hope your arguments would never have any practical
>>effect, though; if they did, people would be using assembler where C
>>was appropriate or vice versa. The languages are different and have
>>different effects on programs written in them.
> 
> 
> If you are using GCC, each program is converted to assembly first,
> representing the same program.
> This is a statement of fact. I wonder why you are worried of
> "practical effects" of me stating it. Is it something like the
> theory of evolution that should not be taught because it is too
> convincing?

Albert, your arguments generally come down to confusing the abstraction 
with the implementation. C is not assembly language: as Billy says, 
there are assembly language programs that cannot be implemented in C, 
and I would add that on most machines, the number of possible assembly 
implementations of a given algorithm grows much faster with program 
length in assembly: there are more arbitrary choices. Thus, assembly 
language has higher entropy than C.

Nor is a Unix file a collection of blocks: that's a common 
implementation, but not the only one. A TCP stream is not a sequence of 
messages: again, it's *implemented* that way, but the messages are 
invisible to the programmer.

A useful abstraction hides irrelevant implementation details from the 
programmer. A useless obfuscation hides relevant details from the 
programmer. Both are simplifications, but one helps and the other hurts. 
The most serious problem in computer science today is consistent bad 
judgement about what is relevant, and consistent denial that this is 
even an issue.

-jpd
From: Paul E. Bennett
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <d1cooh$ig6$1$8300dec7@news.demon.co.uk>
Albert van der Horst wrote:

> You are in good company in not understanding the issue
> why small modules leads to less bugs. I had Les Hatton admit
> it in his course Safer C.

I know Les Hatton from our mutual membership of the Safety Critical Systems 
Club and have sat through a couple of his dissertations on the topic of 
size and complexity.

He presented a curve showing the distribution of bugs found in C program 
modules according to the number of SLOC in each module. This curve had a 
distinctive hump in the middle range and was quite small at the extreme 
ends. However, one thing I picked up on was that the lowest number of lines
of code was on the order of 50 SLOC. I did ask if he had data on smaller
modules than this, to which he had to admit he did not. He did, however,
agree that if the modules had fewer than 50 lines then the likelihood of
there being any errors in the code was going to be fairly small, and would
probably be smaller than the numbers his graph was showing.

The reason given for the modules that had a very large number of SLOC (and 
I forget the upper bounds on this bit) was that when modules grew to that 
sort of size, development pace slowed down because people would realise 
that they had to take much more care with those huge monsters to keep the 
bugs out in the first place. In between they were less careful with their 
constructions. There are probably some interesting psychological reasons 
for this phenomenon.
 
> Small modules leads less bugs through the intermediate of
> smaller programs. In the above example you could say (stylized)
> by adding a limited number of assembler modules (each the
> equivalent of a c-type statement) you have decreased the
> assembler program to a one fifth c-program.

When the modules are trivially small, then it becomes much easier to see 
when there is a bug in the code because you are more likely to have all the 
code for a module under your view. In High Integrity Systems Development 
circles it has long been agreed that if all the programmes were trivially
small we would probably see no bugs at all (or they would be very rare 
indeed).

> The extreme reduction in program size by the Forth design paradigm of
> an application language is what makes Forth relatively bug free. There
> are just less lines to look at. It is that simple.
> 
> Les Hatton assumes (like you do, implicitly) that the choice
> is between a program of ten thousand-line-modules or thousand
> ten-line-modules. It is not. With ten line modules your program
> would not reach 10.000 lines total by a far stretch.

It is a facet that is not often appreciated. By writing smaller, single 
purpose, functional components, the developer can gain the benefit of the 
code re-use holy grail more readily. These smaller functions are likely to 
become easy to use library code that can be trusted, implicitly, to do the 
right thing as they have always done the right thing on their previous few 
million uses.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: Christopher C. Stacy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <uvf7q10ak.fsf@news.dtpq.com>
"Paul E. Bennett" <···@amleth.demon.co.uk> writes:
 
> When the modules are trivially small, then it becomes much easier
> to see when there is a bug in the code because you are more likely to
> have all the code for a module under your view. In High Integrity
> Systems Development circles it has been long agreed that if all the
> programmes were trivially small we would probably see no bugs at all
> (or they would be very rare indeed).

Lisp programmers don't get caught by most of the kinds of bugs that
occur in the small programs that you're talking about (such as pointer
problems, buffer overruns, loop overruns) because the language mitigates
those classes of bugs through features such as object references rather
than raw memory pointers, automatic storage management, bounds checking,
and "higher level" abstractions.

If all programs were trivially small, they wouldn't do much that is useful.
Aren't the interesting bugs the ones that come from design errors?
From: Alain Picard
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <87r7ii1w53.fsf@memetrics.com>
Ed Beroset <·······@mindspring.com> writes:

> Duane Rettig wrote:
>> Correct.  But there are times when a large function can indeed be used
>> without any loss of readability.  Table-based code is one such example;
>> in Lisp a case statement might even span many pages, but to break it
>> up would not necessarily increase the readability or any reusability.
>
> My Lisp is extremely rusty, but it strikes me that a large case
> statement would tend to have short, repetitive actions for each
> case. That is, you wouldn't have extremely long actions for each case.
> Just as in Forth, you'd break things up into named function.
>

I think Duane is referring to code in the style which follows
(an excerpt from the SBCL sources).  I must agree with him that,
though very long (385 lines), and extremely unusual for its length,
this function nevertheless shows good style and maintainability.

I think his complaint is that he prefers a language where it is not
overly difficult to (readably) write such functions when the necessity
arises.

And yes, in this example, the inlining is done for reasons of
efficiency.  Being able to gain such efficiency without paying
an undue cost in readability is clearly a "Good Thing".

;; From src/reader.lisp, SBCL 0.8.16 sources
;;
  (defun read-token (stream firstchar)
    #!+sb-doc
    "This function is just an fsm that recognizes numbers and symbols."
    ;; Check explicitly whether FIRSTCHAR has an entry for
    ;; NON-TERMINATING in CHARACTER-ATTRIBUTE-TABLE and
    ;; READ-DOT-NUMBER-SYMBOL in CMT. Report an error if these are
    ;; violated. (If we called this, we want something that is a
    ;; legitimate token!) Read in the longest possible string satisfying
    ;; the Backus-Naur form for "unqualified-token". Leave the result in
    ;; the *READ-BUFFER*. Return next char after token (last char read).
    (when *read-suppress*
      (internal-read-extended-token stream firstchar nil)
      (return-from read-token nil))
    (let ((attribute-table (character-attribute-table *readtable*))
          (package-designator nil)
          (colons 0)
          (possibly-rational t)
          (seen-digit-or-expt nil)
          (possibly-float t)
          (was-possibly-float nil)
          (escapes ())
          (seen-multiple-escapes nil))
      (reset-read-buffer)
      (prog ((char firstchar))
        (case (char-class3 char attribute-table)
          (#.+char-attr-constituent-sign+ (go SIGN))
          (#.+char-attr-constituent-digit+ (go LEFTDIGIT))
          (#.+char-attr-constituent-digit-or-expt+
           (setq seen-digit-or-expt t)
           (go LEFTDIGIT))
          (#.+char-attr-constituent-decimal-digit+ (go LEFTDECIMALDIGIT))
          (#.+char-attr-constituent-dot+ (go FRONTDOT))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          ;; can't have eof, whitespace, or terminating macro as first char!
          (t (go SYMBOL)))
       SIGN ; saw "sign"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (setq possibly-rational t
              possibly-float t)
        (case (char-class3 char attribute-table)
          (#.+char-attr-constituent-digit+ (go LEFTDIGIT))
          (#.+char-attr-constituent-digit-or-expt+
           (setq seen-digit-or-expt t)
           (go LEFTDIGIT))
          (#.+char-attr-constituent-decimal-digit+ (go LEFTDECIMALDIGIT))
          (#.+char-attr-constituent-dot+ (go SIGNDOT))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))	
          (#.+char-attr-delimiter+ (unread-char char stream) (go RETURN-SYMBOL))
          (t (go SYMBOL)))
       LEFTDIGIT ; saw "[sign] {digit}+"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (return (make-integer)))
        (setq was-possibly-float possibly-float)
        (case (char-class3 char attribute-table)
          (#.+char-attr-constituent-digit+ (go LEFTDIGIT))
          (#.+char-attr-constituent-decimal-digit+ (if possibly-float
                                                       (go LEFTDECIMALDIGIT)
                                                       (go SYMBOL)))
          (#.+char-attr-constituent-dot+ (if possibly-float
                                             (go MIDDLEDOT)
                                             (go SYMBOL)))
          (#.+char-attr-constituent-digit-or-expt+
           (if (or seen-digit-or-expt (not was-possibly-float))
               (progn (setq seen-digit-or-expt t) (go LEFTDIGIT))
               (progn (setq seen-digit-or-expt t) (go LEFTDIGIT-OR-EXPT))))
          (#.+char-attr-constituent-expt+
           (if was-possibly-float
               (go EXPONENT)
               (go SYMBOL)))
          (#.+char-attr-constituent-slash+ (if possibly-rational
                                               (go RATIO)
                                               (go SYMBOL)))
          (#.+char-attr-delimiter+ (unread-char char stream)
                                   (return (make-integer)))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       LEFTDIGIT-OR-EXPT
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (return (make-integer)))
        (case (char-class3 char attribute-table)
          (#.+char-attr-constituent-digit+ (go LEFTDIGIT))
          (#.+char-attr-constituent-decimal-digit+ (bug "impossible!"))
          (#.+char-attr-constituent-dot+ (go SYMBOL))
          (#.+char-attr-constituent-digit-or-expt+ (go LEFTDIGIT))
          (#.+char-attr-constituent-expt+ (go SYMBOL))
          (#.+char-attr-constituent-sign+ (go EXPTSIGN))
          (#.+char-attr-constituent-slash+ (if possibly-rational
                                               (go RATIO)
                                               (go SYMBOL)))
          (#.+char-attr-delimiter+ (unread-char char stream)
                                   (return (make-integer)))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       LEFTDECIMALDIGIT ; saw "[sign] {decimal-digit}+"
        (aver possibly-float)
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-digit+ (go LEFTDECIMALDIGIT))
          (#.+char-attr-constituent-dot+ (go MIDDLEDOT))
          (#.+char-attr-constituent-expt+ (go EXPONENT))
          (#.+char-attr-constituent-slash+ (aver (not possibly-rational))
                                           (go SYMBOL))
          (#.+char-attr-delimiter+ (unread-char char stream)
                                   (go RETURN-SYMBOL))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       MIDDLEDOT ; saw "[sign] {digit}+ dot"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (return (let ((*read-base* 10))
                               (make-integer))))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-digit+ (go RIGHTDIGIT))
          (#.+char-attr-constituent-expt+ (go EXPONENT))
          (#.+char-attr-delimiter+
           (unread-char char stream)
           (return (let ((*read-base* 10))
                     (make-integer))))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       RIGHTDIGIT ; saw "[sign] {decimal-digit}* dot {digit}+"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (return (make-float stream)))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-digit+ (go RIGHTDIGIT))
          (#.+char-attr-constituent-expt+ (go EXPONENT))
          (#.+char-attr-delimiter+
           (unread-char char stream)
           (return (make-float stream)))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       SIGNDOT ; saw "[sign] dot"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-digit+ (go RIGHTDIGIT))
          (#.+char-attr-delimiter+ (unread-char char stream) (go RETURN-SYMBOL))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (t (go SYMBOL)))
       FRONTDOT ; saw "dot"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (%reader-error stream "dot context error"))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-digit+ (go RIGHTDIGIT))
          (#.+char-attr-constituent-dot+ (go DOTS))
          (#.+char-attr-delimiter+  (%reader-error stream "dot context error"))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       EXPONENT
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (setq possibly-float t)
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-sign+ (go EXPTSIGN))
          (#.+char-attr-constituent-digit+ (go EXPTDIGIT))
          (#.+char-attr-delimiter+ (unread-char char stream) (go RETURN-SYMBOL))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       EXPTSIGN ; got to EXPONENT, and saw a sign character
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-digit+ (go EXPTDIGIT))
          (#.+char-attr-delimiter+ (unread-char char stream) (go RETURN-SYMBOL))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       EXPTDIGIT ; got to EXPONENT, saw "[sign] {digit}+"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (return (make-float stream)))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-digit+ (go EXPTDIGIT))
          (#.+char-attr-delimiter+
           (unread-char char stream)
           (return (make-float stream)))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       RATIO ; saw "[sign] {digit}+ slash"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (case (char-class2 char attribute-table)
          (#.+char-attr-constituent-digit+ (go RATIODIGIT))
          (#.+char-attr-delimiter+ (unread-char char stream) (go RETURN-SYMBOL))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       RATIODIGIT ; saw "[sign] {digit}+ slash {digit}+"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (return (make-ratio stream)))
        (case (char-class2 char attribute-table)
          (#.+char-attr-constituent-digit+ (go RATIODIGIT))
          (#.+char-attr-delimiter+
           (unread-char char stream)
           (return (make-ratio stream)))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       DOTS ; saw "dot {dot}+"
        (ouch-read-buffer char)
        (setq char (read-char stream nil nil))
        (unless char (%reader-error stream "too many dots"))
        (case (char-class char attribute-table)
          (#.+char-attr-constituent-dot+ (go DOTS))
          (#.+char-attr-delimiter+
           (unread-char char stream)
           (%reader-error stream "too many dots"))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
       SYMBOL ; not a dot, dots, or number
        (let ((stream (in-synonym-of stream)))
          (if (ansi-stream-p stream)
              (prepare-for-fast-read-char stream
                (prog ()
                 SYMBOL-LOOP
                 (ouch-read-buffer char)
                 (setq char (fast-read-char nil nil))
                 (unless char (go RETURN-SYMBOL))
                 (case (char-class char attribute-table)
                   (#.+char-attr-escape+ (done-with-fast-read-char)
                                         (go ESCAPE))
                   (#.+char-attr-delimiter+ (done-with-fast-read-char)
                                            (unread-char char stream)
                                            (go RETURN-SYMBOL))
                   (#.+char-attr-multiple-escape+ (done-with-fast-read-char)
                                                  (go MULT-ESCAPE))
                   (#.+char-attr-package-delimiter+ (done-with-fast-read-char)
                                                    (go COLON))
                   (t (go SYMBOL-LOOP)))))
              ;; CLOS stream
              (prog ()
               SYMBOL-LOOP
               (ouch-read-buffer char)
               (setq char (read-char stream nil :eof))
               (when (eq char :eof) (go RETURN-SYMBOL))
               (case (char-class char attribute-table)
                 (#.+char-attr-escape+ (go ESCAPE))
                 (#.+char-attr-delimiter+ (unread-char char stream)
                              (go RETURN-SYMBOL))
                 (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
                 (#.+char-attr-package-delimiter+ (go COLON))
                 (t (go SYMBOL-LOOP))))))
       ESCAPE ; saw an escape
        ;; Don't put the escape in the read buffer.
        ;; READ-NEXT CHAR, put in buffer (no case conversion).
        (let ((nextchar (read-char stream nil nil)))
          (unless nextchar
            (reader-eof-error stream "after escape character"))
          (push *ouch-ptr* escapes)
          (ouch-read-buffer nextchar))
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (case (char-class char attribute-table)
          (#.+char-attr-delimiter+ (unread-char char stream) (go RETURN-SYMBOL))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
        MULT-ESCAPE
        (setq seen-multiple-escapes t)
        (do ((char (read-char stream t) (read-char stream t)))
            ((multiple-escape-p char))
          (if (escapep char) (setq char (read-char stream t)))
          (push *ouch-ptr* escapes)
          (ouch-read-buffer char))
        (setq char (read-char stream nil nil))
        (unless char (go RETURN-SYMBOL))
        (case (char-class char attribute-table)
          (#.+char-attr-delimiter+ (unread-char char stream) (go RETURN-SYMBOL))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go COLON))
          (t (go SYMBOL)))
        COLON
        (casify-read-buffer escapes)
        (unless (zerop colons)
          (%reader-error stream "too many colons in ~S"
                        (read-buffer-to-string)))
        (setq colons 1)
        (setq package-designator
              (if (plusp *ouch-ptr*)
                  ;; FIXME: It seems inefficient to cons up a package
                  ;; designator string every time we read a symbol with an
                  ;; explicit package prefix. Perhaps we could implement
                  ;; a FIND-PACKAGE* function analogous to INTERN*
                  ;; and friends?
                  (read-buffer-to-string)
                  (if seen-multiple-escapes
                      (read-buffer-to-string)
                      *keyword-package*)))
        (reset-read-buffer)
        (setq escapes ())
        (setq char (read-char stream nil nil))
        (unless char (reader-eof-error stream "after reading a colon"))
        (case (char-class char attribute-table)
          (#.+char-attr-delimiter+
           (unread-char char stream)
           (%reader-error stream
                          "illegal terminating character after a colon: ~S"
                          char))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+ (go INTERN))
          (t (go SYMBOL)))
        INTERN
        (setq colons 2)
        (setq char (read-char stream nil nil))
        (unless char
          (reader-eof-error stream "after reading a colon"))
        (case (char-class char attribute-table)
          (#.+char-attr-delimiter+
           (unread-char char stream)
           (%reader-error stream
                          "illegal terminating character after a colon: ~S"
                          char))
          (#.+char-attr-escape+ (go ESCAPE))
          (#.+char-attr-multiple-escape+ (go MULT-ESCAPE))
          (#.+char-attr-package-delimiter+
           (%reader-error stream
                          "too many colons after ~S name"
                          package-designator))
          (t (go SYMBOL)))
        RETURN-SYMBOL
        (casify-read-buffer escapes)
        (let ((found (if package-designator
                         (find-package package-designator)
                         (sane-package))))
          (unless found
            (error 'reader-package-error :stream stream
                   :format-arguments (list package-designator)
                   :format-control "package ~S not found"))

          (if (or (zerop colons) (= colons 2) (eq found *keyword-package*))
              (return (intern* *read-buffer* *ouch-ptr* found))
              (multiple-value-bind (symbol test)
                  (find-symbol* *read-buffer* *ouch-ptr* found)
                (when (eq test :external) (return symbol))
                (let ((name (read-buffer-to-string)))
                  (with-simple-restart (continue "Use symbol anyway.")
                    (error 'reader-package-error :stream stream
                           :format-arguments (list name (package-name found))
                           :format-control
                           (if test
                               "The symbol ~S is not external in the ~A package."
                               "Symbol ~S not found in the ~A package.")))
                  (return (intern name found)))))))))

-- 
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <·····················@netlabs.com>
From: Anton Ertl
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <2005Mar14.115449@mips.complang.tuwien.ac.at>
Alain Picard <············@memetrics.com> writes:
>I think Duane is referring to code in the style which follows,
>(an excerpt from the SBCL sources).  I must agree with him that,
>though very long (385 lines), and extremely unusual for its length,
>this function nevertheless shows good style, and maintainability.

The example seems to be highly stylized.  If I were to write such a
thing in Forth, I would write a code generator; wait, I already have
(Gray) :-).  The input for this code generator would be smaller (and
more readable).

The stack depth correctness might be completely dependent on the
correctness of the code generator, i.e., it concentrates in a few lines
of code (so it is no longer a long-definition problem).

Or it might also depend on the stuff used in the input.  In that case,
it may still be possible to insert depth checking code at appropriate
places in the code generator.  And if not (as in Gray), the user
of the code generator can still do the usual checking.  In any case,
the stack depth has not been a problem when using Gray.

I would have expected that macros would be used in Lisp for such code. 
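
For instance (a minimal sketch, with made-up names, and not SBCL's
actual approach): a macro that expands a declarative state table into
the same kind of PROG/GO code as READ-TOKEN, so that only the generator
itself has to be checked by hand.

  (defmacro define-fsm (name args &body states)
    ;; Each state is (STATE-NAME form ...); a form may (go OTHER-STATE)
    ;; or (return VALUE).
    `(defun ,name ,args
       (prog ()
         ,@(loop for (state-name . body) in states
                 append (cons state-name body)))))

  ;; e.g. a two-state machine that skips leading spaces on a stream:
  (define-fsm skip-spaces (stream)
    (start (let ((char (read-char stream nil nil)))
             (cond ((null char) (return nil))
                   ((char= char #\Space) (go start))
                   (t (unread-char char stream) (go done)))))
    (done (return t)))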

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39m06eF60t7buU1@individual.net>
Alain Picard wrote:
> I think Duane is referring to code in the style which follows,
> (an excerpt from the SBCL sources).  I must agree with him that,
> though very long (385 lines), and extremely unusual for its length,
> this function nevertheless shows good style, and maintainability.

I wouldn't call that good style, but that's just me.

> [...] (huge SBCL function)

OMG!  Was this function written by a lexer generator?  If not, maybe it 
should have been!  I can see no reason why anyone would use so many 
labels and gotos for the sake of efficiency.  I have a simple lexer in C 
which just calls aux. functions, *for the sake of readability* (ok, and 
because in C a statement can't be an expression, and the control 
structures suck etc.).  If a function is only called from one site and 
is private to a module it should be inlined anyway, but by the compiler.
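
A minimal sketch of that auxiliary-function style in Lisp (made-up names,
not taken from any real lexer): each state is a small local function,
and control moves by ordinary calls instead of GO.

  (defun read-simple-token (stream)
    "Recognize an integer or an alphanumeric word; NIL at end of input."
    (labels ((start (ch)
               (cond ((null ch) nil)
                     ((digit-char-p ch)
                      (digits (read-char stream nil nil) (list ch)))
                     (t (word (read-char stream nil nil) (list ch)))))
             (digits (ch acc)
               (cond ((and ch (digit-char-p ch))
                      (digits (read-char stream nil nil) (cons ch acc)))
                     (t (when ch (unread-char ch stream))
                        (parse-integer (coerce (nreverse acc) 'string)))))
             (word (ch acc)
               (cond ((and ch (alphanumericp ch))
                      (word (read-char stream nil nil) (cons ch acc)))
                     (t (when ch (unread-char ch stream))
                        (intern (string-upcase
                                 (coerce (nreverse acc) 'string)))))))
      (start (read-char stream nil nil))))

  ;; e.g. (with-input-from-string (s "123 foo") (read-simple-token s)) => 123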
From: Juho Snellman
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <slrnd3bsr2.ki.jsnell@sbz-31.cs.Helsinki.FI>
[ Followups set to c.l.l ]

<···········@web.de> wrote:
>> [...] (huge SBCL function)
> 
> OMG!  Was this function written by a lexer generator? If not, maybe it 
> should have been!  I can see no reason why anyone would use so many 
> labels and gotos for the sake of efficiency. 

The code in question was originally written for Spice Lisp, probably
over 20 years ago. The design tradeoffs were different at that time,
and the code would look different if written for a modern Lisp system
in a modern style.

However, the fact that it still survives in SBCL (a Lisp system whose
raison d'etre is maintainability) suggests that the code isn't as
horrid as people seem to think it is. Neither is it particularly
buggy, but that's only to be expected from code that's been part of
several different Lisp systems over three decades... :-)

-- 
Juho Snellman
"Premature profiling is the root of all evil."
From: David Steuber
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <873buxg8km.fsf@david-steuber.com>
Juho Snellman <······@iki.fi> writes:

> However, the fact that it still survives in SBCL (a Lisp system whose
> raison d'etre is maintainability) suggests that the code isn't as
> horrid as people seem to think it is. Neither is it particularily
> buggy, but that's only to be expected from code that's been part of
> several different Lisp systems during three decades... :-)

And not at all too complex to be replaced by something else ;-)



* Don't fix what isn't broken applies.

-- 
An ideal world is left as an excercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Paul E. Bennett
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d14tf5$bvl$1$8302bc10@news.demon.co.uk>
Alain Picard wrote:

> I think Duane is referring to code in the style which follows,
> (an excerpt from the SBCL sources).  I must agree with him that,
> though very long (385 lines), and extremely unusual for its length,
> this function nevertheless shows good style, and maintainability.

I am not convinced by the good style argument or on the readability front. I
found myself having to be more careful looking this one over.
 
[%X]

> And yes, in this example, the inlining is done for reasons of
> efficiency.  Being able to gain such efficiency without paying
> an undue cost in readability is clearly a "Good Thing".

I went through the code and managed to identify some features of the fsm 
that could probably have been factored out into separate definitions. By 
Forth standards even those segments would have been considered quite long.
 
> ;; From src/reader.lisp, SBCL 0.8.16 sources
> ;;
>   (defun read-token (stream firstchar)
>     #!+sb-doc
>     "This function is just an fsm that recognizes numbers and symbols."
>     ;; Check explicitly whether FIRSTCHAR has an entry for
>     ;; NON-TERMINATING in CHARACTER-ATTRIBUTE-TABLE and
>     ;; READ-DOT-NUMBER-SYMBOL in CMT. Report an error if these are
>     ;; violated. (If we called this, we want something that is a
>     ;; legitimate token!) Read in the longest possible string satisfying
>     ;; the Backus-Naur form for "unqualified-token". Leave the result in
>     ;; the *READ-BUFFER*. Return next char after token (last char read).

At least the above segment is documentary in nature and I can accept that 
more-or-less as-is.

>     (when *read-suppress*

Obviously a check-point for when this fsm is meant to be running.

>     (let ((attribute-table (character-attribute-table *readtable*))

This is, I take it, the beginning of the assignment portion that runs this 
whole fsm. Below are what seem like the labels that could be considered as
functional names for elements of the fsm (about twenty states). In Forth each
of these would most likely have been a separate word and I wonder why they 
weren't separate sub-functions here.

>        SIGN ; saw "sign"
>        LEFTDIGIT ; saw "[sign] {digit}+"
>        LEFTDIGIT-OR-EXPT
>        LEFTDECIMALDIGIT ; saw "[sign] {decimal-digit}+"
>        MIDDLEDOT ; saw "[sign] {digit}+ dot"
>        RIGHTDIGIT ; saw "[sign] {decimal-digit}* dot {digit}+"
>        SIGNDOT ; saw "[sign] dot"
>        FRONTDOT ; saw "dot"
>        EXPONENT
>        EXPTSIGN ; got to EXPONENT, and saw a sign character
>        EXPTDIGIT ; got to EXPONENT, saw "[sign] {digit}+"
>        RATIO ; saw "[sign] {digit}+ slash"
>        RATIODIGIT ; saw "[sign] {digit}+ slash {digit}+"
>        DOTS ; saw "dot {dot}+"
>        SYMBOL ; not a dot, dots, or number
>        ESCAPE ; saw an escape
>         MULT-ESCAPE
>         COLON
>         INTERN
>         RETURN-SYMBOL

Interestingly, just by snipping out what seemed like the underlying portions
(dross) of the posted code I can already get a clearer view of what the
routine was trying to extract. That is only because it now all fits in one
screen-sized view.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: Elizabeth D Rather
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <113c3k7s8f3btc5@news.supernews.com>
"Paul E. Bennett" <···@amleth.demon.co.uk> wrote in message 
··························@news.demon.co.uk...
> Alain Picard wrote:
>
>> And yes, in this example, the inlining is done for reasons of
>> efficiency.  Being able to gain such efficiency without paying
>> an undue cost in readability is clearly a "Good Thing".

Thank heavens you had the patience to actually read this stuff; I certainly
don't!

I just want to comment that the phrase "the inlining is done for reasons of 
efficiency" speaks directly to the issue of Forth being designed to miminize 
any penalty for factoring.  The cost in Forth to call something of the size 
of what I guess to be the inlinable sections would be so small as to be 
unmeasurable.  So with that argument gone, there's no reason not to factor 
nice readable chunks such as you appear to suggest.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Pascal Bourguignon
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <87psy4ird4.fsf@thalassa.informatimago.com>
Duane Rettig <·····@franz.com> writes:
> You apparently didn't count your characters very well.  You (and other
> Forthers) are advocating combining common cliches into single words,
> but when the words are longer than their cliches, do you advocate
> going back to the cliche?  In other words, if John Doty was correct
> in his advocacy of short definitions to long ones, then DUP * should
> win, by his standard.  That certainly wouldn't be my position.  Give
> me a longer, more descriptive name to call any day, rather than its
> guts that are short.

There's an additional consideration that I think is more important: 

        abstraction.

See SICP. In short:

  (defun i (n)
      (cond ((atom n)  (print n))
            (t         (i (car n))
                       (i (cdr n)))))

vs.:

  (defun i (n)
      (cond ((leafp n) (print n))
            (t         (i (left n))
                       (i (right n)))))

with, for example:

  (defmacro leafp (n) `(atom ,n))
  (defmacro left  (n) `(car ,n))
  (defmacro right (n) `(cdr ,n))


It might be interesting to "rename" even a single operator, and to do
so even several times in the same program.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Nobody can fix the economy.  Nobody can be trusted with their finger
on the button.  Nobody's perfect.  VOTE FOR NOBODY.
From: Paul E. Bennett
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d0voad$hsg$1$8302bc10@news.demon.co.uk>
Duane Rettig wrote:

> The following message is a courtesy copy of an article
> that has been posted to comp.lang.lisp,comp.lang.forth as well.
> 
> "Paul E. Bennett" <···@amleth.demon.co.uk> writes:

[%X]
 
>> Take for example the cliche    DUP *
>> 
>> This gives you the square of the number at TOS. In certain applications
>> it might be used a large number of times in different definitions. Hence
>> we might define:-
>> 
>> : SQUARED (S n -- n^2)
>> (G n^2 is the resultant of the number n multiplied by itself. )
>> (  Limitations: the number n can be no larger than the square )
>> (  root of an n^2 which occupies all the bits of a cell).
>>    DUP * ;
> 
> So which is longer: "DUP *", or "SQUARED"?
> 
> I count 4 characters in the first (or 5, if you include whitespace)
> and 7 in the second.

Some of this will, of course, depend on whether or not you leave the name
header in the final code (especially if your development environment
allows you to keep header space and code space separately - which can 
happen with some cross compilers).
 
> And yet, which is more complex cognitively?  If you are an experienced
> Forther, you might answer the first, because you've seen it so often
> and it becomes second nature to you (plus it uses standard words that
> you can trust to do what you expect them to do).  But to anyone not quite
> as familiar with a DUP and * combination, some thought must be put into
> play as to what occurs on the stack, whereas the very word SQUARED is
> self-documenting, and becomes the more readable version.

I would tend to go with the latter if I meant it to read that way (and yes, I 
do include the glossary-style commentary in the source, as I am often 
certifying my code and doing so enables me to keep my mind on a word's 
specification as I write the code - and yes, I mostly write the glossary stuff 
first).
 
>> There is very little cost in doing this as the call structure of Forth
>> was built in at a very fundamental level and is in common enough usage
>> that this was the first part of Forth to be considered for optimisation
>> on any real machine. Would you have it that we should just include the
>> DUP * cliche instead of defining and using SQUARED? How much longer does
>> it need to be?
> 
> You apparently didn't count your characters very well.  You (and other
> Forthers) are advocating combining common cliches into single words,
> but when the words are longer than their cliches, do you advocate
> going back to the cliche?  In other words, if John Doty was correct
> in his advocacy of short definitions to long ones, then DUP * should
> win, by his standard.  That certainly wouldn't be my position.  Give
> me a longer, more descriptive name to call any day, rather than its
> guts that are short.

If the target were so constrained then I would already be looking at 
beheading most of the words that would be hidden from the user-interface 
level. That way only the code portions are left. One of the 
Forths I quite liked in this respect was SplitForth, which ran on the BBC 
Micro.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: Coos Haak
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <cmwouz253wp9.1jrabadfk708t.dlg@40tude.net>
Op 12 Mar 2005 08:04:22 -0800 schreef Duane Rettig:

> "Paul E. Bennett" <···@amleth.demon.co.uk> writes:
> 
>> Duane Rettig wrote:
>> 
<snip>
>>: SQUARED (S n -- n^2)
>> (G n^2 is the resultant of the number n multiplied by itself. )
>> (  Limitations: the number n can be no larger than the square )
>> (  root of an n^2 which occupies all the bits of a cell).
>>    DUP * ;
> 
> So which is longer: "DUP *", or "SQUARED"?
> 
> I count 4 characters in the first (or 5, if you include whitespace)
> and 7 in the second.
> 

This makes no sense: the name SQUARED may be longer, and its definition
requires space, but the calls to it are shorter. Maybe it does not
even make a difference with the inline expansion that an advanced compiler
may use. In the interpreter the definition will even be found earlier
because of the structure of the dictionary, so there is a win there too.
Applications could remove the headers.

-- 
Coos
CHForth, 16 bit DOS
http://home.hccnet.nl/j.j.haak
From: John Doty
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <42336B6B.7080102@whispertel.LoseTheH.net>
Duane Rettig wrote:

> Not in Lisp.  One can create _and_ _test_ functionality very easily
> from the top down.  It is interesting that you like to program
> top-down; this suggests, if you are then forced to test bottom-up,
> that your testing methodology doesn't match your design methodology.

Why should it? They are different problems.

> Now, you haven't said what your implementation methodology is; I
> suspect that it is also bottom-up, with unit testing close behind,
> which creates a mismatch between the design phase and the
> implementation/test phase.  Or, it might be that you like to
> implement top-down as well, which then creates a mismatch between
> the implementation and the test phase.  Either way, you are then
> forced to separate your design from your testing in such a manner
> that, on large enough projects, you end up forgetting the details
> of your design by the time your code is ready for testing.

That's a documentation issue, and the software component of my projects 
is only a modest part of it. You *can't* develop and test hardware 
top-down (but I generally design it top down). And a big problem with 
embedded software is that it's the low-level software that actually makes 
contact with high-level system resource requirements (how much is 
computing the result going to drain the batteries?). So you can't afford 
to put off the implementation and test of the critical low level 
components, but you won't know how to design them unless you start from 
the top. Tricky business.

> In Lisp, I can define higher level functions, and I can test them
> right away; I don't even have to define the lower level functions that
> they call (if one is called and is not yet defined, I get an error
> in a prompt, to which I can respond either by defining the function
> and continuing, or by supplying a value temporarily in place of
> the function call).

This is more an implementation feature than a language feature. It could 
be done 30 years ago in FORTRAN or PL/I on Multics. Not many found it 
especially useful. Nothing prevents a Forth implementation from doing 
this, but I don't expect many Forthers would want it.

> Yes, I agree with you here, although you hardly know any of my
> prejudices.  Here, we do indeed use Forth-like languages for
> some bootstrap loaders (they are actually supplied for us on the
> operating systems we use), but there are very few other areas where
> Lisp is not capable of doing the job nicely.

How many scientific instruments have you built?

-jpd
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39ha0sF63hgmoU2@individual.net>
John Doty wrote:
> That's a documentation issue, and the software component of my projects 
> is only a modest part of it. You *can't* develop and test hardware 
> top-down (but I generally design it top down). And a big problem with 
> embedded software is that its the low level software that actually makes 
> contact with high level system resource requirements (how much is 
> computing the result going to drain the batteries?). So you can't afford 
> to put off the implementation and test of the critical low level 
> components, but you won't know how to design them unless you start from 
> the top. Tricky business.

Indeed.  It's extremely useful to plan something top-down, but 
development has to go bottom-up since that is what gives you working 
code and might show deficiencies in the high-level design early on.

There's a reason why waterfall-style software engineering sucks.
From: Pascal Bourguignon
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <87is3zks4d.fsf@thalassa.informatimago.com>
John Doty <···@whispertel.LoseTheH.net> writes:

> Duane Rettig wrote:
> 
> > C uses infix, and an implied operator precedence, with parens
> > optional unless they must override precedence.  This is, in my
> > opinion, unnatural for both humans and computers.  I give it
> > a zero.
> 
> The problem with this is that infix notation with precedence is the
> result of centuries of intercultural research and negotiation on the
> best way to communicate mathematical ideas. Its strength is revealing
> the relationships among the elements of an expression in a
> human-readable way. It is, however, less good at expressing procedure.

I think that infix notation is not as natural to mathematicians as you
seem to believe.  They actually write their formulas in 2D, not in 1D
as we do.


          (* 2 (/ (^ (sin x) 2) x))


Compare:  2 * sin(x)^2 / x


                  2
               sin (x)
          2 --------------
                  x


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
I need a new toy.
Tail of black dog keeps good time.
Pounce! Good dog! Good dog!
From: John Doty
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4230DFA6.6030008@whispertel.LoseTheH.net>
Pascal Bourguignon wrote:
> John Doty <···@whispertel.LoseTheH.net> writes:
> 
> 
>>Duane Rettig wrote:
>>
>>
>>>C uses infix, and an implied operator precedence, with parens
>>>optional unless they must override precedence.  This is, in my
>>>opinion, unnatural for both humans and computers.  I give it
>>>a zero.
>>
>>The problem with this is that infix notation with precedence is the
>>result of centuries of intercultural research and negotiation on the
>>best way to communicate mathematical ideas. Its strength is revealing
>>the relationships among the elements of an expression in a
>>human-readable way. It is, however, less good at expressing procedure.
> 
> 
> I think that infix notation is not as natural to mathematician as you
> seem to believe.  They actually write their formulas in 2D, not in 1D
> as we do.
> 
> 
>           (* 2 (/ (^ (sin x) 2) x))
> 
> 
> Compare:  2 * sin(x)^2 / x
> 
> 
>                   2
>                sin (x)
>           2 --------------
>                   x

True. I use Mathematica quite a bit in my work: in it you may enter, 
edit, and display expressions in four different forms: Lisp-like 
FullForm, C-like InputForm, unambiguous 2-D StandardForm, and slightly 
ambiguous 2-D TraditionalForm. There are still other forms for output 
only. This is a very nice way to handle these problems: you may freely 
choose the syntax that matches your needs at the moment, e.g. type in 
InputForm, display to a human reader in StandardForm, or examine the 
details of expression structure in FullForm.

Much traditional math notation is, however, linear or nearly so, making 
much less use of one dimension than the other. If you really need a 1-D 
representation, infix is closer to traditional math than others in most 
cases.

-jpd
From: Guy Macon
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <1131on1jjfm9qba@corp.supernews.com>
Pascal Bourguignon wrote:

>I think that infix notation is not as natural to mathematician as you
>seem to believe.  They actually write their formulas in 2D, not in 1D
>as we do.
>
>
>          (* 2 (/ (^ (sin x) 2) x))
>
>
>Compare:  2 * sin(x)^2 / x
>
>
>                  2
>               sin (x)
>          2 --------------
>                  x

Why do 2D problems in a 1D language?  Use Befunge,
the two-dimensional programming language.
From: alex goldman
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <1427741.3e9ftGjpxM@yahoo.com>
Guy Macon <http://www.guymacon.com/> wrote:

> Why do 2D problems in a 1D language?  Use Befunge,
> the two-dimensional programming language.

I wonder what a 3D language can do for my 1D problems. It would probably
squash them like bugs.
From: alex goldman
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <8687065.z94SCGJc1g@yahoo.com>
Duane Rettig wrote: [...]

I always thought Lisp was much, much better than Forth, even though I know
little about Forth. Don't disappoint me :) Don't Forthers ever want to
automate stack handling and catch as many errors as possible at runtime?
Don't they want to name variables / function arguments? Remember what Hal
Abelson said about power over wizards coming from knowing their names. Are
procedures first-class citizens in Forth: can procedures take other
procedures as arguments or return them?
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39ccc3F61jlquU1@individual.net>
alex goldman wrote:
> Duane Rettig wrote: [...]
> 
> I always thought Lisp was much, much better than Forth, even though I know
> little about Forth. Don't disappoint me :) Don't Forthers ever want to
> automate stack handling and catch as many errors as possible at runtime?
> Don't they want to name variables / function arguments? Remember what Hal
> Abelson said about power over wizards coming from knowing their names. Are
> procedures first-class citizens in Forth: can procedures take other
> procedures as arguments or return them?

' bla returns the execution token (roughly, an address) of bla in Forth.  A 
word can then issue a Lisp-like funcall with EXECUTE.  I think you could 
return functions just as well, but they usually wouldn't be closures.
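
For illustration, a minimal sketch in ANS Forth (SQUARE, APPLY and PICK-OP 
are made-up names):

: square   ( n -- n*n )  dup * ;
: apply    ( n xt -- m ) execute ;                         \ like funcall
: pick-op  ( f -- xt )   if ['] square else ['] 1+ then ;  \ returns an xt
5 ' square apply .          \ prints 25
4 true pick-op execute .    \ prints 16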
From: Bernd Paysan
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <hdd8g2-815.ln1@miriam.mikron.de>
alex goldman wrote:

> Duane Rettig wrote: [...]
> 
> I always thought Lisp was much, much better than Forth, even though I know
> little about Forth. Don't disappoint me :) Don't Forthers ever want to
> automate stack handling and catch as many errors as possible at runtime?

No, they don't. It's like bikers, who never want more than two wheels, even
though the bike might fall over.

> Don't they want to name variables / function arguments? Remember what Hal
> Abelson said about power over wizards coming from knowing their names.

Arguments are named in the stack comment.

Actually, there are local variables in Forth, but the consensus is not to
use them when you have only a few elements on the stack - and that's the
design goal. So the most common place for local variables is interfacing
with C libraries, which aren't designed with that goal in mind.
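
For instance (DIST2 is a made-up name; the { } locals notation is the
Gforth-style one used later in this thread, the portable ANS word being
LOCALS|):

: dist2        ( x y -- x*x+y*y )  dup *  swap dup *  + ;      \ stack juggling
: dist2-locals ( x y -- x*x+y*y )  { x y }  x x *  y y *  + ;  \ with locals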

> Are  procedures first-class citizens in Forth: can procedures take other
> procedures as arguments or return them?

Yes, that's possible. With :noname in ANS Forth, you can even create
anonymous procedures on the fly, including currying. Usually, though,
currying is done with CREATE DOES>, but that creates a named word (some
Forth systems have extensions for a nameless CREATE DOES>).
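
A small sketch of both (XT-SQUARE, ADDER and ADD3 are made-up names):

:noname ( n -- n*n )  dup * ;  constant xt-square  \ anonymous word, xt kept
5 xt-square execute .                              \ prints 25

: adder ( n "name" -- )  create ,  does> ( m a-addr -- m+n )  @ + ;
3 adder add3                                       \ "curry" + with 3
4 add3 .                                           \ prints 7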

BTW infix: the origin of infix notation seems to be natural languages,
where you have <subject> <verb> <object> (at least in those languages that
have a long math tradition, like Chinese, Indo-European and Arabic). So you
write <number> <operator> <number>. The only difference between Chinese and
Europe I know of is the division operator: here we write "dividend
divided by divisor", while in China you write "divisor divides
dividend" (you have to - there is no passive voice in Chinese).

So it isn't centuries of consensus, it's millennia of inertia that is
responsible for infix notation. The last century changed everything, and
brought us Polish and reverse Polish notation, which were introduced because
they are a step forward. Natural language is not necessarily a good idea for
abstract formulas.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Duane Rettig
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <4ekemrvqr.fsf@franz.com>
Bernd Paysan <············@gmx.de> writes:

> alex goldman wrote:
> 
> > Duane Rettig wrote: [...]
> > 
> > I always thought Lisp was much, much better than Forth, even though I know
> > little about Forth. Don't disappoint me :) Don't Forthers ever want to
> > automate stack handling and catch as many errors as possible at runtime?

Please note: the attribution may be confusing, but I did not write
this paragraph - Alex did.  I do not think that Lisp is much, much
better than Forth.

Having said that, it is obvious that I think highly of Lisp, having
based my career and livelihood on it.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: billy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1111007818.335398.71580@o13g2000cwo.googlegroups.com>
alex goldman wrote:
>I always thought Lisp was much, much better than Forth,
>even though I know little about Forth.

I see the problem :-). Actually, Lisp is much, much different from
Forth.

Lisp is a high level language which demonstrates some fundamental
properties of a class of language known as "applicative" (named because
programs consist of arguments applied to functions to produce function
calls). Forth is a low-level language which demonstrates some
fundamental properties of a class of language now known as
"concatenative" (named because programs consist of functions
concatenated together to imply their mathematical composition).

There are huge differences between the two types of language, probably
bigger than between any other two languages you've studied before.

To compare the two, perhaps you'd best start with a language that was
written to explore the biggest difference without diverging in too many
other ways: Joy. See
http://www.latrobe.edu.au/philosophy/phimvt/joy.html for more info.

>Are
>procedures first-class citizens in Forth: can procedures take other
>procedures as arguments or return them?

Forth procedures don't take arguments. They mercilessly crush any
potential dissent, and revoke the citizenship of anything that
questions them.

Yes, they are. Forth source code is also a first-class citizen; every
Forth macro is a reader macro, and reader macros in Forth are a lot
simpler than reader macros in Lisp.

-Billy
From: Duane Rettig
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <4wts7ky8o.fsf@franz.com>
"billy" <···········@gmail.com> writes:

> alex goldman wrote:
> >I always thought Lisp was much, much better than Forth,
> >even though I know little about Forth.
> 
> I see the problem :-). Actually, Lisp is much, much different from
> Forth.

Yes, it is; the latter has five letters in its name :-)
In reality, though, it depends on just how much your "much much"
is.  See below.

> Lisp is a high level language which demonstrates some fundamental
> properties of a class of language known as "applicative" (named because
> programs consist of arguments applied to functions to produce function
> calls).

This is actually not true.  Many people try to describe Lisp as
"the language with all the parens" (which it has) or "that functional
language" (which it supports, but does _not_ enforce), or that AI
language (which, again, it tends to support, but is not geared toward
exclusively).

There is also a major difference between Common Lisp and Scheme in
the aspect that you are observing; Scheme emphasizes functional
composition much more than Common Lisp does, whereas Common Lisp
is more general-purpose, and models a wide range of language concepts.

> Forth is a low-level language which demonstrates some
> fundamental properties of a class of language now known as
> "concatenative" (named because programs consist of functions
> concatenated together to imply their mathematical composition).

Yes, but since Forth has global variables, it is not _pure_
concatenative.

> There are huge differences between the two types of language, probably
> bigger than between any other two languages you've studied before.

It's a gap, but I don't consider it to be a wide one.  Many times,
small differences in similar or related languages take on an artificial
size and importance, not because these differences _are_ large, but
because of the granularity with which you study them.  In the Lisp
community, we have studied and argued differences between Scheme and
Common Lisp to the point where one side questions whether the other
language is a Lisp or not.  And this argument takes place between two
languages that even _look_ the same on paper (of course some will argue
with me about that, but that will prove my larger point).

If nobody has already coined this, I'll call it the "feuding cousins"
phenomenon.

I look at Lisp and Forth as two languages with more in common than
not.  Just a few points of similarity:

 1. They are both extensible.

 2. They both allow modification of functionality on the fly (similar
to extensibility, but with a different purpose).

 3. Both are introspective - tools can examine data and code structure
within the application itself - not restricted to external debuggers
only

 4. ... ?

[There are a lot more, but the ones listed are huge anyway.]

> To compare the two, perhaps you'd best start with a language that was
=====^^^^^^^

I assume you mean "contrast" here...

> written to explore the biggest difference without diverging in too many
> other ways: Joy. See
> http://www.latrobe.edu.au/philosophy/phimvt/joy.html for more info.

An interesting read; at first glance Joy seems to be to Forth as Scheme
is to Common Lisp - the former tries to become more simplified and
purified than its latter counterpart.  I personally have no problem with
Forth being "impure", nor do I have any problem with Common Lisp being
"impure".  Give me practicality over purity any day.

> >Are
> >procedures first-class citizens in Forth: can procedures take other
> >procedures as arguments or return them?
> 
> Forth procedures don't take arguments. They mercilessly crush any
> potential dissent, and revoke the citizenship of anything that
> questions them.
> 
> Yes, they are. Forth source code is also a first-class citizen; every
> Forth macro is a reader macro, and reader macros in Forth are a lot
> simpler than reader macros in Lisp.

Yet another similarity between Forth and Lisp....

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Will Hartung
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <39s368F65kcc0U1@individual.net>
"Duane Rettig" <·····@franz.com> wrote in message
··················@franz.com...
> If nobody has already coined this, I'll call it the "feuding cousins"
> phenomenon.
>
> I look at Lisp and Forth as two languages with more in common than
> not.  Just a few points of similarity:
>
>  1. They are both extensible.
>
>  2. They both allow modification of functionality on the fly (similar
> to extensibility, but with a different purpose).
>
>  3. Both are introspective - tools can examine data and code structure
> within the application itself - not restricted to external debuggers
> only
>
>  4. ... ?

I think the most telling of similarities can be provided by example.

Frank Sargent (sp?) is porting his Pygmy Forth to the ARM.

His Forth has been around quite a while. Very Forthy. Metacompiler. Blocks.
Simple. Neat little system.

His Pygmy Forth for the ARM is written in Common Lisp on top of CLISP which
creates the image to download to the development board.

Note, I'm not bringing this up in a "Lisp > Forth" way. I'm bringing it up
in a "Great minds think alike" vein. Here's a guy who likes, uses,
understands, "gets" Forth, its idioms etc, yet he was still able to find
Lisp comfortable enough to write his new implementation using it.

I thought it was very interesting when I stumbled upon it. There may be hope
for all of us...

Regards,

Will Hartung
(·····@msoft.com)
From: Frank Sergeant
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <87fyyto32o.fsf@bed.utoh.org>
"Will Hartung" <·····@msoft.com> writes:

> I think the most telling of similarities can be provided by example.
> Frank Sargent (sp?) is porting his Pygmy Forth to the ARM.

I usually spell it "Frank Sergeant" but I'm used to lots of variants.

> His Forth has been around quite a while. Very Forthy. Metacompiler. Blocks.
> Simple. Neat little system.

Thank you.  16-bit Pygmy Forth for the x86 has been around since the
late 1980s (1989 perhaps?).  It is based heavily upon Chuck Moore's
cmFORTH but with additions and changes to accommodate the DOS
environment.  It has been well received.  I think this is because it is
small, comes with full source code in Forth, can compile itself
("metacompiling"), is multitasking, and has a built-in editor and
assembler.  It is approachable and easy for someone to try out.
 
> His Pygmy Forth for the ARM is written in Common Lisp on top of CLISP
> which creates the image to download to the development board.

I have implemented Forths for various processors over the years.  My
usual approach was first to write an assembler for the new processor, in
Forth, that would run on Pygmy.  Then, I would write the primitives for
the new processor using that assembler.  Then I would copy the
high-level Forth code for the kernel from Pygmy and use Pygmy's
metacompiler to generate the new Forth.

This time, for the ARM version of Pygmy (Riscy Pygness), I got lazy.  I
didn't *want* to write another assembler.  So, I wrote the Riscy Pygness
primitives using the GNU ARM assembler and debugged them in gdb, using
the GNU ARM simulator.  (All of this under Linux on x86.)  I think, on
the whole, this was a reasonable decision and worked out well for *me*.
However, it may raise the barrier to the adoption of Riscy Pygness by
Forth programmers because of the external assembler.

The previous 32-bit version of Pygmy (for the Hitachi H8/532) was
metacompiled from the 16-bit x86 Pygmy and that got a little awkward.
For the ARM, to make it easy on me, I decided to use Lisp for the
compiling stage.  I used CLISP but might have used CMUCL or even Emacs
Lisp as I was not stressing the capabilities of Lisp.  Again, I am happy
with this choice but worry that it is another barrier to Forthers
wanting to use Riscy Pygness.  It shouldn't be much of a barrier because
a user of the Forth need not touch the Lisp code, just the Forth source
that it compiles.

Whereas anyone could play with Pygmy on DOS, Riscy Pygness has a much
narrower audience: an embedded systems developer who has chosen the ARM,
who likes Forth, who can install the ARM GNU tool chain (if he needs to
change the primitives), who can install a Lisp, and who appreciates the
various aspects of Pygmy, including certain ColorForth similarities.  On
the other hand, it is free (MIT-style license), multitasking, and tiny.
It comes with example code for accessing MMC/SD flash disks via SPI, for
reprogramming on-board flash by the application, for accessing on-chip
peripherals, etc.

It is available at http://pygmy.utoh.org/riscy.  The ARM could well be
the wave of the future (the present, even) for embedded systems, with an
explosion of new, small, cheap ARM chips being offered and/or
announced.  Riscy Pygness runs on a $60 development board from Olimex
(Spark Fun and others are US distributors) and should run on the tiniARM
board by New Micros (I plan to try it any month now).  If someone wants
to beat me to it, I would love to hear the results.

> Note, I'm not bringing this up in a "Lisp > Forth" way. I'm bringing
> it up in a "Great minds think alike" vein. Here's a guy who likes,
> uses, understands, "gets" Forth, its idioms etc, yet he was still able
> to find Lisp comfortable enough to write his new implementation using
> it.

Thank you!  I may print that out and frame it.


-- 
Frank
From: billy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1111077407.183590.79460@l41g2000cwc.googlegroups.com>
Duane Rettig wrote:
> "billy" <···········@gmail.com> writes:
> > alex goldman wrote:
> > >I always thought Lisp was much, much better than Forth,
> > >even though I know little about Forth.
> > I see the problem :-). Actually, Lisp is much, much different from
> > Forth.

> Yes, it is; the latter has five letters in its name :-)
> In reality, though, it depends on just how much your "much much"
> is.  See below.

This is entirely true. Forth and Lisp invite comparison and
distinction, because of many similar capabilities they share. Actually,
so do all languages, but Forth and Lisp are more fun to compare. :-)

> > Lisp is a high level language which demonstrates some fundamental
> > properties of a class of language known as "applicative" (named because
> > programs consist of arguments applied to functions to produce function
> > calls).

> This is actually not true.

Not to interrupt -- I did read your entire message -- but you never
provide the least scrap of support for this statement. How is it not
true that Lisp is an applicative language? Of all of the applicative
languages, Lisp is archetypical in that every construct is designed to
appear like the application of parameters to a function to produce a
function call. Even special forms and macros (which do not form
function calls) nonetheless look like them.

Lisp is designed to parse into a homogeneous tree which its
nested-list-processing operators can chomp. In contrast, Forth is
designed to parse into a linear list -- but since the text already
forms a linear list, most Forths don't bother with the parsing step.
(I'm not claiming any superiority here, just establishing a
difference.)

> Many people try to describe Lisp as
> "the language with all the parens" (which it has) or "that functional
> language" (which it supports, but does _not_ enforce), or that AI
> language (which, again, it tends to support, but is not geared toward
> exclusively).

None of these are remotely similar to what I want to communicate.

> There is also a major difference between Common Lisp and Scheme in
> the aspect that you are observing; Scheme emphasizes functional
> composition much more than Common Lisp does, whereas Common Lisp
> is more general-purpose, and models a wide range of language concepts.

This doesn't appear to me to relate in any way to anything I was
observing. If you thought I was saying that Lisp is a "functional"
language, I miscommunicated.

In fact, it's interesting that the very definition of a "pure
functional language" differs between Lisp-like languages and Forth-like
languages. In Lisp, a pure functional language "wins" by more
accurately modelling lambda theory. In Forth, a pure functional
language "wins" by modelling concatenative theory -- and the two
theories produce entirely different rules and restrictions.

> > Forth is a low-level language which demonstrates some
> > fundamental properties of a class of language now known as
> > "concatenative" (named because programs consist of functions
> > concatenated together to imply their mathematical composition).

> Yes, but since Forth has global variables, it is not _pure_
> concatenative.

Incorrect. Global variables have absolutely nothing to do with
concatenative purity. Forth _is_ impure, but that's because it has
compilation-affecting words, such as IF-ELSE-THEN and POSTPONE. A
correct program written in a purely concatenative language could be
split into two correct programs by cutting it on any token boundary.
You can't in general do that with Forth (although of course you can
always do it by following simple rules, and most of the time you can
just do it).
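
For instance (CLAMP0 is a made-up word, just to illustrate the cutting test):

: clamp0 ( n -- n|0 )  dup 0< if drop 0 then ;

Cutting this between IF and THEN leaves neither half a correct Forth
program, whereas in a purely concatenative language any cut would do.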

This is an example of the gap between applicative and concatenative
languages -- mutable variables are mathematical poison for one and
irrelevant for the other; while structured constructs are pleasant for
one and dangerous for the other. (Theoretically speaking, of course; in
practice, we quickly learn to work around the dangers of both. This
leads many people to shun theory, but that's not the right reaction -
the right reaction is to understand the theory and its shortcomings.)

> > There are huge differences between the two types of language, probably
> > bigger than between any other two languages you've studied before.

> It's a gap, but I don't consider them to be a wide gap.

Nothing wrong with that. :-) There certainly are many similarities.

To me, one of the most interesting differences is that applicative
languages have been extensively studied and are well-grounded by theory
(lambda, currying, and so on); concatenative languages have been
entirely ignored by theoreticians, so much so that the term
"concatenative language" was only coined in the past few years.

>  Many times,
> small differences in similar or related languages take on an artificial
> size and importance, not because these differences _are_ large, but
> because of the granularity with which you study them.  In the Lisp
> If nobody has already coined this, I'll call it the "feuding cousins"
> phenomenon.

I like that term.

There's a difference, though. Learning Scheme _and_ Lisp is of limited
marginal utility; learning one gets you most of the benefit of learning
the other, and teaches you most of the same things. But learning Forth
_and_ Lisp is hugely useful (or at least each one teaches many things
that the other one doesn't).

> I look at Lisp and Forth as two languages with more in common than
> not.  Just a few points of similarity:

>  1. They are both extensible.

Yes, because extensibility is good. But how they do it is entirely
different, and results in a different programmer experience.

>  2. They both allow modification of functionality on the fly (similar
> to extensibility, but with a different purpose).

Forth isn't dynamically bound, by the way, if that's what you meant.
Redefining a word doesn't change existing words defined in terms of it.
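
A tiny sketch of that (made-up names):

: greet   ." hi" ;
: hello   greet ;
: greet   ." HELLO" ;   \ redefine GREET
hello                   \ still prints "hi": HELLO was compiled with the old GREET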

>  3. Both are introspective - tools can examine data and code structure
> within the application itself - not restricted to external debuggers
> only

Forth compiled code is not available for introspection, and source code
is only available as a string.

>  4. ... ?
> [There are a lot more, but the ones listed are huge anyway.]

Yes, this is true. Chuck Moore definitely learned from Lisp.

> > To compare the two, perhaps you'd best start with a language that was
> =====^^^^^^^

> I assume you mean "contrast" here...

Nope. A proper comparison starts with things that are believed to be
fundamentally different. A proper contrast is done between things that
are thought to be fundamentally the same.

> > written to explore the biggest difference without diverging in too many
> > other ways: Joy. See
> > http://www.latrobe.edu.au/philosophy/phimvt/joy.html for more info.

> An interesting read; at first glance Joy seems to be to Forth as Scheme
> is to Common Lisp - the former tries to become more simplified and
> purified than its latter counterpart.  I personally have no problem with
> Forth being "impure", nor do I have any problem with Common Lisp being
> "impure".  Give me practicality over purity any day.

Oh, I agree.

The author of Joy didn't know anything about Forth (aside from the fact
that it existed); he wasn't attempting to improve it in any way. His
interests were largely theoretical. The fact that his theory described
Forth and Postscript (and manages to explain Postscript's success as
probably the most widely used automatically generated language) is a
bonus.

> > >Are
> > >procedures first-class citizens in Forth: can procedures take other
> > >procedures as arguments or return them?

> > Yes, they are. Forth source code is also a first-class citizen; every
> > Forth macro is a reader macro, and reader macros in Forth are a lot
> > simpler than reader macros in Lisp.

> Yet another similarity between Forth and Lisp....

Actually, it's a difference; they share a surface feature, but it's
fundamentally completely different, and the results of using it are
completely different.

For example, I claimed that reader macros are a lot easier to write in
Forth than in Lisp. If this were true, you'd expect to see a lot more
reader macros in Forth than in Lisp. But you don't; Lispers use far
more macros of every type than Forthers do, and Forthers actually
officially discourage macro use of this kind. Why? Because every macro
has to implement a parser, thus creating a redundant bit of code and
isolating the client's syntax from the real syntax of Forth. It's
better to allow the language to do the work for you.
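
To make the "implements a parser" point concrete, a minimal sketch (the name
.LEN-OF-NEXT is invented): the word reads its own argument from the input
stream, the way ' or CHAR do, instead of taking it from the stack.

: .len-of-next ( "word" -- )  bl word count nip . ;
.len-of-next antidisestablishmentarianism   \ prints 28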

> Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/

-Billy
From: Albert van der Horst
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <idhwqp.i85@spenarnc.xs4all.nl>
In article <·············@franz.com>, Duane Rettig  <·····@franz.com> wrote:

<SNIP>

>
>I look at Lisp and Forth as two languages with more in common than
>not.  Just a few points of similarity:
>
> 1. They are both extensible.
>
> 2. They both allow modification of functionality on the fly (similar
>to extensibility, but with a different purpose).
>
> 3. Both are introspective - tools can examine data and code structure
>within the application itself - not restricted to external debuggers
>only
>
> 4. ... ?

In short, both are Artificial Intelligence languages.
Very few, if any, other languages are.

>Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/

Groetjes Albert
--
-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: Lars Brinkhoff
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <85acp2p6ym.fsf@junk.nocrew.org>
Duane Rettig <·····@franz.com> writes:
> I look at Lisp and Forth as two languages with more in common than
> not.  Just a few points of similarity:
> 
>  1. They are both extensible.
> 
>  2. They both allow modification of functionality on the fly (similar
> to extensibility, but with a different purpose).
> 
>  3. Both are introspective - tools can examine data and code structure
> within the application itself - not restricted to external debuggers
> only
> 
>  4. ... ?

Maybe:

The whole language is always available.

Can use (almost) any character in an identifier.

ANSI standard in the mid-1990s.

Learning the language makes you a better programmer, even when using
other languages.

Language for smart programmers.  Great productivity.  Bad programmers
can write really bad code.

Project failures have been blamed on the language.

Outsiders often have negative prejudices about the language.

Documentation can refer to a memory location as a "cell".

Many dialects.

Hackers often amuse themselves by writing their own toy implementation.
From: Darin Johnson
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <82wts4gi31.fsf@usa.net>
"billy" <···········@gmail.com> writes:

> Lisp is a high level language ...

I always liked to consider many Lisps to be low-level languages.
They're low level since they're composed of very primitive operations
which aren't hidden from the programmer; it's just that the machine model
is different from the typical real-world computer.

-- 
Darin Johnson
    Gravity is a harsh mistress -- The Tick
From: Thomas Pornin
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <d0q0nl$3i0$1@biggoron.nerim.net>
According to Duane Rettig  <·····@franz.com>:
> C uses infix, and an implied operator precedence, with parens
> optional unless they must override precedence.  This is, in my
> opinion, unnatural for both humans and computers.

For computers, the question of parsing expressions with precedence
has been debated at length, and simple expression parsers are available.
See the classical Dragon Book (by Aho, Sethi, Ullman) for details. The
parser for a Lisp-like prefix syntax is certainly simpler. The parser for
Forth RPN is simpler still. But the parser for C expressions is not
very complex.

Human beings are another thing. For humans, nothing beyond grunting,
screaming and sleeping is really natural. Parsing infix expressions is
unnatural, but so is reading. Both are trained competences. For that
matter, infix notation for expressions is taught in school and we all
know how to deal with it (even operator precedence). C infix notation
just capitalizes on this, in the same way that C, Lisp, Forth and most
other programming languages capitalize on the programmer's training in
reading.


	--Thomas Pornin
From: Greg Menke
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <m3is3zwcxs.fsf@athena.pienet>
······@nerim.net (Thomas Pornin) writes:

> According to Duane Rettig  <·····@franz.com>:
> 
> Human beings are another thing. For humans, nothing beyond grunting,
> screaming and sleeping is really natural. 

You forgot to add using emacs to that list. ;)

Gregm
From: clvrmnky
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <N12Yd.93563$vO1.582556@nnrp1.uunet.ca>
On 10/03/2005 1:52 PM, Greg Menke wrote:
> ······@nerim.net (Thomas Pornin) writes:
> 
>>According to Duane Rettig  <·····@franz.com>:
>>
>>Human beings are another thing. For humans, nothing beyond grunting,
>>screaming and sleeping is really natural. 
>  
> You forgot to add using emacs to that list. ;)
> 
Is that a typo?  I'm pretty sure you meant "vi."

-- 
Clever "Never Miss a Chance to Resurrect the VI-Emacs Holy War" Monkey
From: Andreas Kochenburger
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <osh331p545nm08j57nplqlsjq1674jebca@4ax.com>
You guys will probably also find the history of Smalltalk interesting
http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html

Download the PDF file, get your reading glasses ready ;-) I found it
delightful reading during a business trip.

PS Lisp is in the story, but Forth sadly not.


Cheers, Andreas

http://www.minforth.net.ms/
From: Anton Ertl
Subject: Syntax and stack depth (was: [OT] PostLisp, a language experiment)
Date: 
Message-ID: <2005Mar13.095848@mips.complang.tuwien.ac.at>
Duane Rettig <·····@franz.com> writes:
>Bernd Paysan <············@gmx.de> writes:
>> This has been crossposted to comp.lang.forth. Forth is RPN, therefore the
>> syntax is completely without parentheses.

Actually, the question of parentheses is orthogonal to the question of
prefix (PN) or postfix (RPN).  I.e., you can have a prefix language
without parentheses, or a postfix language with parentheses.

The real issue here is that Lisp programs are trees (S-Expressions)
and there is no programmer-visible stack (or at least it is rarely
visible), whereas in Forth the program is a sequence of words that
work on the programmer-visible stack.

Now, there are many possible ways to serialize general trees (e.g.,
XML), but prefix is a good one: it works the same with all arities (in
contrast to infix, which works only for arity 2), is not ambiguous
when each operator has only one arity, and you see the operator first.
Lisp wants to support operators with variable and unknown (not yet
defined) arities, so it adds parentheses for disambiguation.  This
also makes the syntax more uniform and thus helps tools like indenting
editors.
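
For example, the same expression tree serialized three ways (just an
illustration, assuming fixed arities):

Prefix without parentheses:   + * 2 3 4
Lisp prefix with parentheses: (+ (* 2 3) 4)
Forth postfix, executed left to right against the stack:

2 3 * 4 + .   \ prints 10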

If you have a sequence of words that operate on a stack, a postfix
syntax for things representable as trees falls out naturally from
executing the words left-to-right, top-to-bottom.  However, in that
setting you can do many things that you cannot do directly with trees,
like DUP or SWAP, so calling the Forth syntax postfix/RPN is somewhat
misleading.  Parentheses make no sense in that setting (and are
consequently used as comment delimiters in Forth and as string
delimiters in Postscript).

>However, each language has vastly different
>ways of handling argument _mismatches_, which is what I want to
>concentrate on here:
...
>Forth [I have to plead ignorance as to what technologies have been
>added since 1978, so my score may be too low] uses RPN to match arguments
>to a word by pushing them onto a stack and having them be there when
>the word is invoked.  It is the word's responsibility to pop the correct
>number of arguments off, and the calling word's responsibility to push
>the correct number of arguments, so if there is a mismatch, the user stack
>will end up either growing or shrinking incorrectly if there is a
>mismatch.
>Again, there may be new standards in Forth which allow for either
>compile-time or run-time argument match checking, but back in 1978, my
>greatest challenge in Forth was to know where the stack was at all times.
>There were even coding worksheets, which provided a line for each word
>as it was executed, and how large the stack was at each point.  I still
>use this kind of charting whenever I build byte-interpreters- my
>byte-interpreters usually tend to look very Forth-like.
>I give Forth a 1 for simplicity, but nothing for robustness in the
>face of incorrect calls.  Of course, when I was in the position of
>fighting with this I had envisioned a patchable rewrite of the Forth
>defining words to add argument checking during debug - perhaps someone
>else has also done this, and perhaps argument mismatch has long since
>been a solved problem.

Actually most Forth programmers consider it a non-problem, so no,
typically Forth systems have no special tools to deal with it (apart
from the usual separate testing of each word).  There are some efforts
to address such issues, though, e.g. strongForth
(http://home.vr-web.de/stephan.becher/forth/); some people here have
also mentioned writing such things by hand.

The reason why this is considered a non-problem is that people tend to
write definitions short enough that they can keep track of the stack
items; having to document what is on the stack in the middle of the
word (as you mentioned with your forms) is considered an indication
that you have difficulty with that, and should consider breaking the
word down further.

When you do get the stack wrong, this usually shows up pretty close to
the place of the error, because the following words get the wrong
arguments, so the error is typically easy to find.

However, I recently had a case where it was not: I
wrote a word that passes quite a bit of data through locals instead of
the stack, and at one point I apparently left a superfluous zero
somewhere.  The word was also extremely long (36 lines), so finding
the place of the bug would have required a little bit of work (I did
not fix it, since the bug did not affect the application).

So I guess that if you use such a locals-oriented programming style,
you might have a use for a stack-depth checking tool.  A simple one
would be something like:

variable entry-depth

: wrap ( -- old-depth )
  entry-depth @
  depth 1- entry-depth ! ;  \ 1- so the old-depth just pushed is not counted

: check-depth ( -- )
  depth entry-depth @ <> abort" wrong depth" ;

: unwrap ( old-depth -- )
  entry-depth ! ;

Putting the old-depth on the return stack, letting WRAP and UNWRAP
disappear, and finding a nice name for CHECK-DEPTH is left as an
exercise to the student:-).

The use of the words as shown above would be

: foo { all the arguments -- ... }
  wrap >r
  .... code with stack effect ( -- ) ...
  check-depth ... another ( -- ) fragment ...
  check-depth ... more code ...
  r> unwrap ;

Of course, after the exercise, this might look like:

: foo { all the arguments -- ... }
  <0> .... code with stack effect ( -- ) ...
  <0> ... another ( -- ) fragment ...
  <0> ... more code ... ;

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: John Passaniti
Subject: Re: Syntax and stack depth
Date: 
Message-ID: <CWZYd.132051$nC5.81906@twister.nyroc.rr.com>
Anton Ertl wrote:
> Now, there are many possible ways to serialize general trees (e.g., 
> XML), but prefix is a good one: it works the same with all arities
> (in contrast to infix, which works only for arity 2), [...]

Although I'm not disagreeing with the larger substance of what you 
wrote, I don't think this last point is correct.

The ternary operator in C is, I think, an example of an infix operator 
that has arity 3.  Lexically the operator is spread over two characters 
(? and :), but logically, it's all part of a single operator.  One can 
further imagine other kinds of infix operators with arbitrary arity:

x = f $ a,b,c,d;

In this invented infix syntax, the operator "$" applies the function 
argument f in turn on each of the comma-separated arguments that 
follow-- and, well, does something presumably useful with them.  The 
point here is that infix operators can have arbitrary arity.
From: Anton Ertl
Subject: Re: Syntax and stack depth
Date: 
Message-ID: <2005Mar13.181753@mips.complang.tuwien.ac.at>
John Passaniti <····@JapanIsShinto.com> writes:
>Anton Ertl wrote:
>> Now, there are many possible ways to serialize general trees (e.g., 
>> XML), but prefix is a good one: it works the same with all arities
>> (in contrast to infix, which works only for arity 2), [...]
>
>Although I'm not disagreeing with the larger substance of what you 
>wrote, I don't think this last point is correct.
>
>The ternary operator in C is, I think, an example of an infix operator 
>that has arity 3.  Lexically the operator is spread over two characters 
>(? and :), but logically, it's all part of a single operator.

Ok, good point.  I amend my statement: infix works only for arity >=2.

>  One can 
>further imagine other kinds of infix operators with arbitrary arity:
>
>x = f $ a,b,c,d;
>
>In this invented infix syntax, the operator "$" applies the function 
>argument f in turn on each of the comma-separated arguments that 
>follow-- and, well, does something presumably useful with them.

From the description that sounds more like $ being a binary operator
(with f and the list/tuple "a,b,c,d" as arguments), doing something
like a map or reduce combinator.

But at least the syntax could be for a 5-ary operator ".$.,.,.,.".

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: John Passaniti
Subject: Re: Syntax and stack depth
Date: 
Message-ID: <Ug%Yd.70121$vK5.26571@twister.nyroc.rr.com>
Anton Ertl wrote:
> Ok, good point.  I amend my statement: infix works only for arity
> >=2.

Does that mean that operators like ! and ~ and * (pointer dereference) 
in C aren't considered infix?  I guess they might be considered prefix, 
but I haven't ever heard them described like that.
From: Ulrich Hobelmann
Subject: Re: Syntax and stack depth
Date: 
Message-ID: <39je5mF61fa7hU1@individual.net>
John Passaniti wrote:
> Does that mean that operators like ! and ~ and * (pointer dereference) 
> in C aren't considered infix?  I guess they might be considered prefix, 
> but I haven't ever heard them described like that.

I would definitely call them prefix.
From: Barry Margolin
Subject: Re: Syntax and stack depth
Date: 
Message-ID: <barmar-D0F205.13395913032005@comcast.dca.giganews.com>
In article <···············@individual.net>,
 Ulrich Hobelmann <···········@web.de> wrote:

> John Passaniti wrote:
> > Does that mean that operators like ! and ~ and * (pointer dereference) 
> > in C aren't considered infix?  I guess they might be considered prefix, 
> > but I haven't ever heard them described like that.
> 
> I would definitely call them prefix.

I think the real dichotomy between language syntax styles is in the way 
the parser determines which operands go with which operators.  "infix" 
is the name typically given to syntax that looks similar to traditional 
mathematical notation: binary operators go between the operands, unary 
operators precede them, binding strength rules are used to resolve 
ambiguities (e.g. a+b*c => a+(b*c)), and parentheses can be used to 
override them.  While not everything in infix notation is actually an 
infix, the name is given to the whole approach because the use of 
infixes is one of its most distinguishing features.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Anton Ertl
Subject: Re: Syntax and stack depth
Date: 
Message-ID: <2005Mar13.193231@mips.complang.tuwien.ac.at>
John Passaniti <····@JapanIsShinto.com> writes:
>Does that mean that operators like ! and ~ and * (pointer dereference) 
>in C aren't considered infix?

Of course not (well, * is also infix).

>  I guess they might be considered prefix, 
>but I haven't ever heard them described like that.

I think Dennis Ritchie writes in the HOPL-II paper that the syntax of
C would be easier if * were postfix instead of prefix.  While we are
at C, what should we call ".[.]"?  I guess a new term is needed for
that.

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: Dave Thompson
Subject: Re: Syntax and stack depth
Date: 
Message-ID: <6vms31d4hdeepnqrj90cnfe3f948favegr@4ax.com>
On Sun, 13 Mar 2005 18:32:31 GMT, ·····@mips.complang.tuwien.ac.at
(Anton Ertl) wrote:

> John Passaniti <····@JapanIsShinto.com> writes:
> >Does that mean that operators like ! and ~ and * (pointer dereference) 
> >in C aren't considered infix?
> 
Right, !lognot ~bitnot *deref &addr -negation +identity and sizeof are
prefix, although actually called unary in the standard. And
[subscript] .member ->member and (funcargs) are considered postfix
operators, not variants of primary or equivalent as in many languages.
This is presumably due to the BCPL heritage where array pointers were
(routinely) computable values, function pointers could be, and
structure pointers probably would have been if they existed at all. 
++ and -- are available in both prefix and postfix with (somewhat)
different semantics. And in C99 the new 'compound literal' syntax is
actually classified as postfix, but AFAICS this was just to jam it
into the existing grammar, semantically it isn't an operator at all.

> Of course not (well, * is also infix).
> 
> >  I guess they might be considered prefix, 
> >but I haven't ever heard them described like that.
> 
> I think Dennis Ritchie writes in the HOPL-II paper that the syntax of
> C would be easier if * were postfix instead of prefix.  While we are
> at C, what should we call ".[.]"?  I guess a new term is needed for
> that.
> 
He was talking specifically about the syntax of _declarators_, where
*ptr is the only prefix -- ary[bound] and func(), later func(argtypes),
are postfix. And he was quoting someone else, although without disputing it.

As above [] is just considered postfix. x?y:z is commonly called
ternary, but it is the only example of the class and the C standard
just deals with it individually as the 'Conditional' operator. FWIW it
does associate as p1?t1: p2?t2: p3?t3: f (very) vaguely like COND.

- David.Thompson1 at worldnet.att.net
From: Kenny Tilton
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <t20Yd.26060$534.24652@twister.nyc.rr.com>
Stefan Scholl wrote:
> On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:
> 
> 
>>(Sorry for crossposting this.  If you specifically refer to one language 
>>family only, please post the answer only to one group.)
>>
>>There are several things I (and others) don't really like about Lisp and 
>>Forth, but most live with it.  Today I set out to try a mix of both to 
>>see, how certain things improve.
>>
>>IMHO the worst thing in Lisp is its many parentheses.
> 
> 
> OK. I don't have to read any further.

that is where I stopped, too. :)

kenny

-- 
Cells? Cello? Cells-Gtk?: http://www.common-lisp.net/project/cells/
Why Lisp? http://lisp.tech.coop/RtL%20Highlight%20Film

"Doctor, I wrestled with reality for forty years, and I am happy to 
state that I finally won out over it." -- Elwood P. Dowd
From: lin8080
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <42336C18.98A860E5@freenet.de>
Kenny Tilton schrieb:
> Stefan Scholl wrote:
> > On 2005-03-10 08:05:47, Ulrich Hobelmann wrote:


> >>IMHO the worst thing in Lisp is its many parentheses.

> > OK. I don't have to read any further.

> that is where I stopped, too. :)

hihi

Why not set a nice pair of () around it and let it be wrecked by the
interpreter.

stefan
From: Will Hartung
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39c53sF5ughqbU1@individual.net>
"Ulrich Hobelmann" <···········@web.de> wrote in message
····················@individual.net...
> (Sorry for crossposting this.  If you specifically refer to one language
> family only, please post the answer only to one group.)
>
> There are several things I (and others) don't really like about Lisp and
> Forth, but most live with it.  Today I set out to try a mix of both to
> see, how certain things improve.
>
> IMHO the worst thing in Lisp is its many parentheses.  While they do a
> great job in structuring the language and allowing features such as
> variable numbers of arguments and macros, they turn every program into a
> ragged tree (while a Forth function typically is one or two lines).  I
> don't say this is terrible, but I thought I might try to improve on it.
>
> The worst thing about Forth IMHO is the stack clutter.  When every
> function has some swaps, dups, overs and nips in it, it takes
> concentration to keep track of the stack in your mind.  Sure, there are
> stack comments, but when you write those, you might just as well use
> local variables instead.

You really should go and look at HP's RPL. This is really a fabulous
language, IMHO.

Reverse Polish Lisp is strikingly concise and powerful.

You get a lot of the dynamism of a Lisp, with the "simplicity" of a Forth
like syntax.

You can look here for a nice description of it.

http://www.systella.fr/~bertrand/rpl2/english.html

Regards,

Will Hartung
(·····@msoft.com)
From: Ulrich Hobelmann
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39ckmtF5takriU1@individual.net>
Will Hartung wrote:
> You really should go and look at HP's RPL. This is really a fabulous
> language, IMHO.
> 
> Reverse Polish Lisp is strikingly concise and powerful.
> 
> You get a lot of the dynamism of a Lisp, with the "simplicity" of a Forth
> like syntax.
> 
> You can look here for a nice description of it.
> 
> http://www.systella.fr/~bertrand/rpl2/english.html

Interesting.  The way they do if-then-else doesn't look too elegant 
though (too C-like).
From: Brian Downing
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <habYd.116071$4q6.32936@attbi_s01>
In article <···············@individual.net>,
Will Hartung <·····@msoft.com> wrote:
> You really should go and look at HP's RPL. This is really a fabulous
> language, IMHO.
> 
> Reverse Polish Lisp is strikingly concise and powerful.
> 
> You get a lot of the dynamism of a Lisp, with the "simplicity" of a Forth
> like syntax.
> 
> You can look here for a nice description of it.
> 
> http://www.systella.fr/~bertrand/rpl2/english.html

Sadly, it doesn't have lexical closures AFAIK.

It would be neat if << n -> << n 1 + 'n' STO n >> >> could work as it
does in CL and Scheme.

-bcd
-- 
*** Brian Downing <bdowning at lavos dot net> 
From: Will Hartung
Subject: Re: [OT] PostLisp, a language experiment
Date: 
Message-ID: <39e6m5F5v2vtgU1@individual.net>
"Brian Downing" <·············@lavos.net> wrote in message
···························@attbi_s01...
> In article <···············@individual.net>,
> Will Hartung <·····@msoft.com> wrote:
> > You really should go and look at HP's RPL. This is really a fabulous
> > language, IMHO.
> >
> > Reverse Polish Lisp is strikingly concise and powerful.
> >
> > You get a lot of the dynamism of a Lisp, with the "simplicity" of a
Forth
> > like syntax.
> >
> > You can look here for a nice description of it.
> >
> > http://www.systella.fr/~bertrand/rpl2/english.html
>
> Sadly, it doesn't have lexical closures AFAIK.
>
> It would be neat if << n -> << n 1 + 'n' STO n >> >> could work as it
> does in CL and Scheme.

Yeah, it also lacks the ability to "define defining words", so to speak. In
Lisp we'd use macros, in Forth you'd use CREATE DOES>, but when I used to
write code for the HP calculator, it was a real joy to work with.
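
(For reference, the Forth pattern looks something like this -- just a
sketch, using the textbook example of a home-made CONSTANT:)

\ CREATE builds a new word, , stores a cell in its body, and DOES>
\ supplies the run-time behaviour of the words defined this way.
: const  ( n "name" -- )  create ,  does> ( -- n ) @ ;
42 const answer
answer .   \ prints 42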

Every now and then I get the urge to write an interpreter to play with on a
PC, outside of the calc's environment. Something like RPLIOD, akin to
SIOD... modulo the symbolic math routines, of course.

Regards,

Will Hartung
(·····@msoft.com)
From: ·······@netcourrier.com
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1110546235.477165.299260@z14g2000cwz.googlegroups.com>
>
> The worst thing about Forth IMHO is the stack clutter.  When every
> function has some swaps, dups, overs and nips in it, it takes
> concentration to keep track of the stack in your mind.  Sure, there
are
> stack comments, but when you write those, you might just as well use
> local variables instead.
>

Objection! This is not a problem that belongs to Forth, but a problem
with you (or those who wrote the programs you've seen) not knowing how
to program in Forth.
In a Forth programmer's eye, a program that has "stack clutter" is bad,
and begs to be rewritten/redesigned. Good programs keep stack juggling
to a minimum: nothing more than a DUP or a SWAP or an OVER to glue
things together. Such programs can be read by newbies just an hour
after their first exposure to the language (that is, when they have
finished learning Forth ;)
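
For example (just a sketch, with made-up names): factor until each
definition needs at most one DUP, SWAP or OVER as glue.

\ squaring gets its own word, so the caller needs only a single SWAP
: sq     ( n -- n*n )        dup * ;
: sumsq  ( a b -- a*a+b*b )  sq swap sq + ;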

 Amicalement,
  Astrobe
From: alex goldman
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1370180.hMRdqsvFYp@yahoo.com>
·······@netcourrier.com wrote:

> Such programs can be read by newbies just an hour
> after their first exposure to the langage ( that is, when they finished
> to learn Forth ;)

I'm willing to spend 1 hour to add Forth to my CV, considering some Lisp
gurus recommended it. Where do I begin and finish within an hour?

(If it takes me more than 1 hour, you will be billed at $65 per hour rate
for extras)
From: ·······@netcourrier.com
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1110550382.088304.198500@g14g2000cwa.googlegroups.com>
alex goldman wrote:
> ·······@netcourrier.com wrote:
>
> > Such programs can be read by newbies just an hour
> > after their first exposure to the langage ( that is, when they
finished
> > to learn Forth ;)
>
> I'm willing to spend 1 hour to add Forth to my CV, considering some
Lisp
> gurus recommended it. Where do I begin and finish within an hour?
>
> (If it takes me more than 1 hour, you will be billed at $65 per hour
rate
> for extras)

Well, gimme 1 day to write it down :)

 Amicalement,
  Astrobe

PS: I'm only half-joking. When I have some spare time, I'll try to
write a one-hour (or so) tutorial for my own Forth dialect, in one
day (or so).
From: Ulrich Hobelmann
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <39es0cF60l82tU1@individual.net>
alex goldman wrote:
> I'm willing to spend 1 hour to add Forth to my CV, considering some Lisp
> gurus recommended it. Where do I begin and finish within an hour?

I think one hour is extremely unrealistic for any programming language, 
so I'm not sure if you are trolling here.

Thinking Forth (it's online on SourceForge, thanks to lots of nice 
people who TeXed it) is a nice book, but you might want to read the 
GForth tutorial (for instance) first.

> (If it takes me more than 1 hour, you will be billed at $65 per hour rate
> for extras)

No.  You have been warned.
From: Paul E. Bennett
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <d0uq5d$euo$2$8302bc10@news.demon.co.uk>
Ulrich Hobelmann wrote:

> alex goldman wrote:
>> I'm willing to spend 1 hour to add Forth to my CV, considering some Lisp
>> gurus recommended it. Where do I begin and finish within an hour?
> 
> I think one hour is extremely unrealistic for any programming language,
> so I'm not sure if you are trolling here.
> 
> Thinking Forth (it's online on SourceForge, thanks to lots of nice
> people who TeXed it) is a nice book, but you might want to read the
> GForth tutorial (for instance) first.
> 
>> (If it takes me more than 1 hour, you will be billed at $65 per hour rate
>> for extras)
> 
> No.  You have been warned.

If it is just for the purpose of adding Forth to his CV then I think he 
would be better advised to spend his hour reading something entirely 
different. However, if he does read any Forth texts I would hope he is 
going to be more serious about the subject and want to get much deeper into 
it.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: Howard Lee Harkness
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <8mj631haunn267iekl3rmtv48o3klv5pq8@4ax.com>
"Paul E. Bennett" <···@amleth.demon.co.uk> wrote:

>If it is just for the purpose of adding Forth to his CV then I think he 
>would be better advised to spend his hour reading something entirely 
>different.

In the early 90's, I found that removing Forth from my resume got me a lot more
interviews.

--
Howard Lee Harkness
www.Texas-Domains.com
www.HostPCI.com
From: ·······@netcourrier.com
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1110804483.708659.283340@g14g2000cwa.googlegroups.com>
Ulrich Hobelmann wrote:
> alex goldman wrote:
> > I'm willing to spend 1 hour to add Forth to my CV, considering some
Lisp
> > gurus recommended it. Where do I begin and finish within an hour?
>
> I think one hour is extremely unrealistic for any programming
language,
> so I'm not sure if you are trolling here.
>

You mean: for any non-trivial language.
Forth focuses on keeping things simple, and this implies that the
language has to be as simple as possible as well.
I can draw here a basic scheme of a possible "Forth in 1 hour" paper:

1. The data stack, reverse polish notation
2. Interpret vs. compile, definitions
3. fetching, storing, allocating memory
4. Control-flow of programs
5. CREATE and DOES>
6. Immediacy
7. Parsing words
8. The little things I forgot.

Most of these "chapters" should fit in a single page.
This results in roughly 5 minutes per chapter: 2 minutes to read, 3
minutes to experiment.
Of course, the success or failure of this plan depends a lot on both
the teacher and the student. This is not "Become a Forth Guru in one
hour", but at least it explains the Alpha and the Omega of Forth
within one hour.
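
To give a feel for it, chapters 1 and 2 might boil down to something
like this (a sketch, not the actual paper):

\ 1. Reverse polish notation: operands first, then the operator.
2 3 + .              \ prints 5
\ 2. A colon definition compiles a new word.
: squared  ( n -- n*n )  dup * ;
7 squared .          \ prints 49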

 Amicalement,
  Astrobe
From: Elizabeth D Rather
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <113blgopoodrrd8@news.supernews.com>
<·······@netcourrier.com> wrote in message 
·····························@g14g2000cwa.googlegroups.com...
> ...
> I can draw here a basic scheme of a possible "Forth in 1 hour" paper:
>
> 1. The data stack, reverse polish notation
> 2. Interpret vs. compile, definitions
> 3. fetching, storing, allocating memory
> 4. Control-flow of programs
> 5. CREATE and DOES>
> 6. Immediacy
> 7. Parsing words
> 8. The little things I forgot.
>
> Most of these "chapters" should fit in a single page.
> This results in raughly 5 minutes per chapter: 2 minutes to read, 3
> minutes to experiment.
> Of course, the success or the failure of this plan depends a lot both
> on the teacher and the student. Of course, this is not " Become a Forth
> Guru in one hour" but at least it explains the Alpha and the Omega of
> Forth within one hour.

Looks like a great outline for a presentation, but I'm not sure this 
qualifies anyone to "put Forth on a resume".  Would an hour's presentation 
on swimming qualify a non-swimmer to be on the swim team?

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
================================================== 
From: Albert van der Horst
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <idd5dh.2dr@spenarnc.xs4all.nl>
In article <···············@news.supernews.com>,
Elizabeth D Rather <··········@forth.com> wrote:
><·······@netcourrier.com> wrote in message
>·····························@g14g2000cwa.googlegroups.com...
>> ...
>> I can draw here a basic scheme of a possible "Forth in 1 hour" paper:
>>
>> 1. The data stack, reverse polish notation
>> 2. Interpret vs. compile, definitions
>> 3. fetching, storing, allocating memory
>> 4. Control-flow of programs
>> 5. CREATE and DOES>
>> 6. Immediacy
>> 7. Parsing words
>> 8. The little things I forgot.
>>
>> Most of these "chapters" should fit in a single page.
>> This results in raughly 5 minutes per chapter: 2 minutes to read, 3
>> minutes to experiment.
>> Of course, the success or the failure of this plan depends a lot both
>> on the teacher and the student. Of course, this is not " Become a Forth
>> Guru in one hour" but at least it explains the Alpha and the Omega of
>> Forth within one hour.
>
>Looks like a great outline for a presentation, but I'm not sure this
>qualifies anyone to "put Forth on a resume".  Would an hour's presentation
>on swimming qualify a non-swimmer to be on the swim team?

It is no use putting something on one's resume after one hour of study.
You have to convince an interviewer to invite you.

I guess one day is more realistic. I was interested in a job as a
senior perl programmer, having never done anything in perl in any of
my dozens of projects. But after one day (wall clock, not a working day)
my Pentium assembler was ported to perl (from Forth), and I sent it
along with the application.
They invited me for an interview. They didn't hire me, but they
apparently didn't doubt my perl abilities. (Someone remarked "nice
perl code". That was it.)

(I think Forth is considerably harder than perl to get to the level
of "nice forth code". But maybe the OP could do it in one day.
One hour... no.)

>Cheers,
>Elizabeth

Groetjes Albert

P.S. For those interested, the perl version is on my website.
Perl is not a bad language for an assembler, with its associative
text arrays.

--
-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: ·······@netcourrier.com
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1110889488.863281.166900@z14g2000cwz.googlegroups.com>
[...]
> >Looks like a great outline for a presentation, but I'm not sure this
> >qualifies anyone to "put Forth on a resume".  Would an hour's
presentation
> >on swimming qualify a non-swimmer to be on the swim team?
>

Don't forget that this discussion started with a half-joke from me,
saying that well-written Forth programs can be read by one-hour
newbies; then I was asked where the stuff that lets you learn Forth in
one hour can be found, so that this person can add Forth to his resume
(and he was half-joking too, I guess).
So, I completely agree you cannot *master* Forth in 1 hour; my point is
that you can *learn* it in 1 hour.

> It is no use putting something on ones resume after one hour study.

I think that an honest (wo)man wouldn't add any programming language
until (s)he has written a real-world application with it.

> I guess one day is more realistic.

It has been reported that you can train people in Forth and have them
writing the first lines of their project the next day.

[...]
>
> (I think Forth is considerably harder than perl, to get at the
> level of "nice forth code". But maybe the OP could do it in one day.
> One hour... No. )
>

I don't know. I think it all depends on the person's background. It is
not the same experience for someone who learns Forth as his/her first
language, someone coming from assembly, someone coming from C, or
someone coming from Java or Python.

[...]
>
> P.S. For those interested, the perl version is on my website.
> Perl is not a bad language for an assembler, with its associative
> text arrays.
> 

How dare you! :)

 Amicalement,
  Astrobe
From: Darin Johnson
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <82mzt4qc6w.fsf@usa.net>
·······@netcourrier.com writes:

> I think that an honnest (wo)man wouldn't add any prograaming langage
> until (s)he has written a real-world application with it.

Oh man.  I _still_ haven't gotten around to doing a real-world
application yet.  Academia doesn't count as real-world, and working on
the OS layer that other stuff sits on top of doesn't count, and
integrating real-world applications together doesn't count, and
maintaining code doesn't count, and utilities don't count, and
creating a real-world app as part of a group means I've only written
a fraction of it.

Of course, this all depends upon what "real-world application" means
:-) Personally, I'd say one can learn a lot more about how to program
in a language by maintaining someone else's sloppy code, or by writing
the real-world application a second time after ripping up the first.
Just writing the application once doesn't really teach much.

Many programmers never get beyond the stage of translating how they
write in their favorite language to the new language.  I.e., they think
about how they'd write it in C and then try to do the same in another
language.  They can write an application, but they haven't really
picked up the language much.  Or as someone I knew once said while
commenting on some ugly code, "he can write Fortran in any language."

-- 
Darin Johnson
    My shoes are too tight, and I have forgotten how to dance -- Babylon 5
From: alex goldman
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1148154.GmABqz6y9U@yahoo.com>
·······@netcourrier.com wrote:

> You mean: for any non-trivial langage.
> Forth focuses on keeping things simple, and this implies that the
> langage has to be as simple as possible as well.

If Forth is very simple, does it mean its implementation, an incremental
compiler, can be very simple as well? 

Are there untyped Forths BTW? Something one could use as a programmable
assembly?
From: John Doty
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <42373E71.7070000@whispertel.LoseTheH.net>
alex goldman wrote:
> ·······@netcourrier.com wrote:
> 
> 
>>You mean: for any non-trivial langage.
>>Forth focuses on keeping things simple, and this implies that the
>>langage has to be as simple as possible as well.
> 
> 
> If Forth is very simple, does it mean its implementation, an incremental
> compiler, can be very simple as well? 

Yes. If you don't need a lot of features built in you can get 
primitives, compiler, command loop, and a few device drivers into 4k 
bytes on some hardware.

> 
> Are there untyped Forths BTW? Something one could use as a programmable
> assembly?

Forth attempts to separate containers from contents. So, for example, 
"@" fetches a typeless "cell". Various operators ("words") assume 
various encodings of the data within the cell: the programmer decides 
what's needed. This principle is strained by complex hardware, though, 
so many dialects have specialized variants of "@" and its relatives for 
things like characters and floating point.
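
For instance (a small sketch in standard Forth):

variable x
char A x !     \ store the character code of A into a typeless cell
x @ .          \ one word treats the cell as a number: prints 65
x @ emit       \ another treats the same cell as a character: prints A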

Forth is the easiest foundation for an assembler in existence, at least 
if you go with its nature and implement a reverse polish assembler 
("operand operator" rather than the other way around). This way, you get 
the expression evaluation and macro expansion facilities of an assembler 
automatically.

-jpd
From: John Doty
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <42376AF7.3000408@whispertel.LoseTheH.net>
John Doty wrote:

> Forth is the easiest foundation for an assembler in existence, at least 
> if you go with its nature and implement a reverse polish assembler 
> ("operand operator" rather than the other way around). This way, you get 
> the expression evaluation and macro expansion facilities of an assembler 
> automatically.

Just to put some numbers on this assertion, I dug up my old 8X300 
microcontroller assembler.

1 variable (the location counter)
5 support definitions
8 operation definitions, one per opcode

That's it. None of these was more than one line long.
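
To make that concrete, the definitions might look roughly like this (a
sketch only; the opcodes and names here are made up, not the real 8X300
set):

create image 256 allot            \ buffer to hold the assembled code
variable loc   image loc !        \ the location counter
: b,      ( byte -- )  loc @ c!  1 loc +! ;    \ support: lay down a byte
: opcode  ( code "name" -- )  create ,  does> ( operand -- ) @ or b, ;
16 opcode MOVE                    \ one line per operation
32 opcode XMIT
5 MOVE  7 XMIT                    \ usage: operand first, then the mnemonic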

-jpd
From: Albert van der Horst
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <idfukp.kv7@spenarnc.xs4all.nl>
In article <················@whispertel.LoseTheH.net>,
John Doty  <···@whispertel.LoseTheH.net> wrote:
<SNIP>
>
>Forth is the easiest foundation for an assembler in existence, at least
>if you go with its nature and implement a reverse polish assembler
>("operand operator" rather than the other way around). This way, you get

Forth is full of surprises. A PostIt FixUp assembler is actually simpler,
and it is operator first. (Look up ciasdis if you're interested.)

>the expression evaluation and macro expansion facilities of an assembler
>automatically.

Still so. And the best error detection in town.

>
>-jpd
>


--
-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: Ulrich Hobelmann
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <39oqg7F63g14gU1@individual.net>
alex goldman wrote:
> If Forth is very simple, does it mean its implementation, an incremental
> compiler, can be very simple as well? 
> 
> Are there untyped Forths BTW? Something one could use as a programmable
> assembly?

Almost all are typeless.
I don't know what you mean by programmable assembly...
From: John Passaniti
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <jQHZd.622$Yc5.232@news02.roc.ny>
alex goldman wrote:
> Are there untyped Forths BTW?

Huh?  Forth *is* a typeless language.  Items on the stack are a "cell"
which individual words interpret as necessary.  Or put another way, the
type of values on the stack is in the programmer's head, not in the
language.

This is, of course, both a good and bad thing, relative to your
expectations.

> Something one could use as a programmable assembly?

If by "programmable assembly" you mean an assembly language that can be 
extended at compile time, I think that's one valid way to view Forth.
From: John Doty
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <42375DAC.2080405@whispertel.LoseTheH.net>
alex goldman wrote:
> John Passaniti wrote:
> 
> 
>>alex goldman wrote:
>>
>>>Are there untyped Forths BTW?
>>
>>Huh?  Forth *is* a typeless language.  Items on the stack are a "cell"
>>which individual words interpret as necessary.  Or put another way, the
>>type of values on the stack is in the programmer's head, not in the
>>language.
> 
> 
> I didn't know. No one mentioned this when we were comparing it to
> PostScript. If Forth is completely typeless, like assembly, why is it
> rather slower than C/C++? (cf. shootout). Failure to use the registers?
> http://shootout.alioth.debian.org/

Gforth is highly portable and free, but not especially fast. There are 
faster Forth implementations. But a more important issue is that low 
level benchmarks don't show Forth's main speed advantage. The cost of 
modularity in Forth is low relative to other languages, so complete 
applications are often surprisingly fast.

-jpd
From: Bernd Paysan
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <brclg2-mpl.ln1@miriam.mikron.de>
John Doty wrote:
> Gforth is highly portable and free, but not especially fast. There are
> faster Forth implementations.

Also, we have problems with recent GCC versions, where bugs in GCC make
Gforth much slower than correct code generation would. We have a workaround
for that, but it's not yet released (there is also a bugfix in GCC for this
problem, but that version of GCC is not yet released either).

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Anton Ertl
Subject: Types (was: PostLisp, a language experiment)
Date: 
Message-ID: <2005Mar16.093832@mips.complang.tuwien.ac.at>
John Passaniti <····@JapanIsShinto.com> writes:
>alex goldman wrote:
>> Are there untyped Forths BTW?
>
>Huh?  Forth *is* a typeless language.  Items on the stack are a "cell"
>which individual words interpret as necessary.

Words like "untyped" and "typeless" usually indicate that the person
who uses them does not know what he is talking about.  E.g., Pascal
fans tend to call Lisp and C typeless or untyped.  These terms are
meaningless and only appropriate in advocacy groups, if at all.

So, what can we say about types in Forth?  

Forth as a language does have types (see
http://www.complang.tuwien.ac.at/forth/dpans-html/dpans3.htm): E.g.,
cells, chars, floats, signed and unsigned integers, addresses with
various alignments, doubles of various sorts, execution tokens, two
string types, etc.  However, the language is designed such that it is
not necessary for implementations to keep track of the types.

Forth implementations typically do not type-check, nor keep track of
types (many implementations do some checking of control-flow stack
items, though).

If you want a shorthand for this state of affairs, say that Forth is
not type-checked.

In theory, Forth systems could keep track of types, and perform
type-checking; to cover all allowed cases, the type-checks would have
to be done at run-time (so the run-time system would have to keep
track of types), but for most of the code the type-checking can be
done statically (so the type-checks and maybe also the generation of
the type information could be optimized away).

However, I think that the type rules given in the standard are not
adequate for such type checking.  E.g., consider the following stack
effects from the standard:

+ ( n1|u1 n2|u2 -- n3|u3 )
* ( n1|u1 n2|u2 -- n3|u3 )

And consider the following code fragments:

+ @
* @

Given that they have the same stack effects, either both code
fragments should be considered potentially type-correct, or both type
errors.  However, "+ @" does certainly occur in many correct Forth
programs, whereas "* @" is pretty much guaranteed to be an error.
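
For illustration, here is the usual way "+ @" shows up: indexing into a
cell array.

create nums 10 , 20 , 30 ,           \ a small array of cells
: nth  ( i -- n )  cells nums + @ ;  \ scale the index, add the base, fetch
1 nth .                              \ prints 20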

There have been several efforts at type checking (or other kinds
of checking) for Forth: Jonah Thomas' dynamic checker (AFAIK never
completed), Jaanus Poial's static type checker, a static type checker
by Bill Ragsdale and Peter Knaggs, and strongForth, a statically
type-checked Forth implementation.  AFAIK they all had to refine the
Forth type system for this purpose.

>Or put another way, the
>type of values on the stack is in the programmer's head, not in the
>language.

Certainly the programmer has to keep track of the types as the
language sees them.  Moreover, the programmer also has a more refined
type model in his head (and to some extent in the documentation), as
in other languages; e.g., the programmer might have the concept of a
sorted array in his head that he passes to a binary search.  This kind
of stuff is not covered by the type system in any language (certainly
not to the extent that type checking of more primitive types is).

>> Something one could use as a programmable assembly?

The way Forth and assembly language deal with types is certainly the
same.

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: alex goldman
Subject: Re: Types (was: PostLisp, a language experiment)
Date: 
Message-ID: <5859852.llaOBFKvEE@yahoo.com>
Anton Ertl wrote:

> John Passaniti <····@JapanIsShinto.com> writes:
>>alex goldman wrote:
>>> Are there untyped Forths BTW?
>>
>>Huh?  Forth *is* a typeless language.  Items on the stack are a "cell"
>>which individual words interpret as necessary.
> 
> Words like "untyped" and "typeless" usually indicate that the person
> who uses them does not know what he is talking about.

No, it's you who knows zilch about what you are talking about. Pick up a
book on type theory, typed lambda calculus, computer linguistics, etc. and
you'll see plenty of references to "untyped" or "typeless". QED.

Languages can be typed or untyped, and, orthogonally, safe and unsafe. Safe
untyped languages are more commonly called "dynamically typed" by people
outside of the field. And "untyped" or "typeless" in the terminology of Joe
Hacker means "untyped unsafe", like assembler.
From: Darin Johnson
Subject: Re: Types (was: PostLisp, a language experiment)
Date: 
Message-ID: <82sm2sgh9j.fsf@usa.net>
alex goldman <·····@spamm.er> writes:

> Languages can be typed or untyped, and, orthogonally, safe and unsafe. Safe
> untyped languages are more commonly called "dynamically typed" by people
> outside of the field. And "untyped" or "typeless" in the terminology of Joe
> Hacker means "untyped unsafe", like assembler.

There is also a difference between "abstract" types, in the sense of
what a thing actually does and is used for (an index into an array),
versus primitive and composite types (unsigned 16-bit integer,
structure layouts, etc).  Then there is the issue of whether a
language actually tries to enforce a set of typing rules, which is
what most people mean when they talk about typed or untyped languages,
or safe versus unsafe, or type-checking.

In other words, all data have types, whether or not the language
makes them explicit or attempts to ensure that use of the data
is consistent with their types.

-- 
Darin Johnson
    "Particle Man, Particle Man, doing the things a particle can"
From: Anton Ertl
Subject: Re: Types (was: PostLisp, a language experiment)
Date: 
Message-ID: <2005Mar29.141845@mips.complang.tuwien.ac.at>
alex goldman <·····@spamm.er> writes:
>Languages can be typed or untyped, and, orthogonally, safe and unsafe. Safe
>untyped languages are more commonly called "dynamically typed" by people
>outside of the field. And "untyped" or "typeless" in the terminology of Joe
>Hacker means "untyped unsafe", like assembler.

I am not sure that these words are in the terminology of Joe Hacker.
However, I have seen well-known computer scientists use these terms
for Lisp and C, so these people apparently used these words
differently than you do (I guess you would call Lisp "untyped safe"
and C "typed unsafe", right?).

These words are in the vocabulary of the average computer scientist,
but typically only with a very murky and ad-hoc meaning.  That's why I
recommend avoiding these words.

Even for type theorists and such, while they know very well what they
mean with "typed", they are not that interested in other stuff, so
they might classify everything that does not fit in their theory as
"untyped"; since almost every language of interest in practice does
not fit nicely into theories, almost every language can be called
"untyped".  Of course, another type theory might fit the language, so
the same language could also be called "typed" (and yes, there are
papers about type theory for Forth).  So even type theories don't give
us a good indication of how to use these words.

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: Lars Brinkhoff
Subject: Re: Types (was: PostLisp, a language experiment)
Date: 
Message-ID: <85ekefrimc.fsf@junk.nocrew.org>
·····@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> the programmer might have the concept of a sorted array in his head
> that he passes to a binary search.  This kind of stuff is not
> covered by the type system in any language

This isn't terribly useful or relevant to the discussion at hand, but
always enjoying a Lisp riddle, I came up with this:

  (deftype sorted-sequence (&optional (test #'<=))
    (with-gensyms (predicate)
      (setf (symbol-function predicate)
            (lambda (seq)
              (sorted-sequence-p seq test)))
      `(satisfies ,predicate)))

(Sorted-sequence-p left as an exercise.)
From: Kalle Olavi Niemitalo
Subject: Re: Types
Date: 
Message-ID: <87r7hzgee0.fsf@Astalo.kon.iki.fi>
Lars Brinkhoff <·········@nocrew.org> writes:

>   (deftype sorted-sequence (&optional (test #'<=))
>     (with-gensyms (predicate)
>       (setf (symbol-function predicate)
>             (lambda (seq)
>               (sorted-sequence-p seq test)))
>       `(satisfies ,predicate)))

(flet ((lessp (a b) ...)) (check-type x (sorted-sequence #'lessp)))
apparently cannot be made to work.
From: Ulrich Hobelmann
Subject: Re: Types
Date: 
Message-ID: <39rd36F65ajhmU1@individual.net>
Anton Ertl wrote:
> Words like "untyped" and "typeless" usually indicate that the person
> who uses them does not know what he is talking about.  E.g., Pascal
> fans tend to call Lisp and C typeless or untyped.  These terms are
> meaningless and only appropriate in advocacy groups, if at all.

No.  Forth has no type inference and no programmer-annotated types.  In 
your mind a cell might have a type, but the CPU doesn't care if you try 
to use an XT as an integer.
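
For example (a quick sketch, gforth-flavoured):

: hello  ." hi" ;
' hello .          \ the execution token printed as if it were a plain integer
' hello execute    \ the very same cell used as an execution token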

C is statically but weakly typed, since it has types but doesn't enforce 
them (strictly).  Forth has no types at the language level.  Sure, it 
*could* have types in some cases, and definitely dynamic typing like the 
Lisps, but it doesn't.

Lisp is dynamically but strongly typed (unless you turn typechecking 
off for performance).  Java too.  Some other languages (ML, Haskell) 
are both statically and strongly typed.
From: Stephen Pelc
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <4238067a.595632500@192.168.0.1>
On Tue, 15 Mar 2005 13:40:27 -0800, alex goldman <·····@spamm.er>
wrote:

>If Forth is completely typeless, like assembly, why is it
>rather slower than C/C++? (cf. shootout). Failure to use the registers?
>http://shootout.alioth.debian.org/

I haven't looked at the shootout results, but I have written
a number of Forth code generators. We maintain a Forth benchmark
suite (with results) which is available from our web site.

The range of performance from current desktop Forth systems
varies by at least 10:1, with the majority of native code
compilers in a 4:1 range. Our VFX code generator has been
shown (by others) to be faster than gcc on some benchmarks.

Many Forth compilers come from environments in which code
density is more important than raw speed. I only have detailed
knowledge of our own VFX code generators, but even within
these it is true to say that the PC version (in VFX Forth
for Windows/DOS/Linux) uses additional techniques that are
options for embedded systems compilers, e.g. automatic
inlining of small functions. VFX compilers can and will use
all available registers.

Stephen


--
Stephen Pelc, ··········@INVALID.mpeltd.demon.co.uk
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeltd.demon.co.uk - free VFX Forth downloads
From: Andreas Klimas
Subject: VFX for Linux was: Re: PostLisp, a language experiment
Date: 
Message-ID: <39r1piF62dr5qU1@individual.net>
Stephen Pelc wrote:

 > I only have detail
> knowledge of our own VFX code generators, but even within
> these it is true to say that the PC version (in VFX Forth
> for Windows/DOS/Linux)

So you already have VFX Forth for Linux?
I can't find any reference to this topic on your web site.

Andreas
From: Stephen Pelc
Subject: Re: VFX for Linux was: Re: PostLisp, a language experiment
Date: 
Message-ID: <4238d761.649111281@192.168.0.1>
On Wed, 16 Mar 2005 16:33:51 +0100, Andreas Klimas
<······@klimas-consulting.com> wrote:

>Stephen Pelc wrote:
>
> > I only have detail
>> knowledge of our own VFX code generators, but even within
>> these it is true to say that the PC version (in VFX Forth
>> for Windows/DOS/Linux)
>
>so you already have VFX Forth for Linux ?
>I can't find any reference on your web site for this
>topic

For the last couple of years I have promised to have this ready for
Christmas! It's an "ample spare time" project. The prototype has been
running for several years, but we have to bring it back into line with
the existing source tree and do the documentation. We put in a new
Linux box before Christmas and haven't touched it this year.

Stephen

--
Stephen Pelc, ··········@INVALID.mpeltd.demon.co.uk
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeltd.demon.co.uk - free VFX Forth downloads
From: Paul E. Bennett
Subject: Re: VFX for Linux was: Re: PostLisp, a language experiment
Date: 
Message-ID: <d1cj2a$436$1$8302bc10@news.demon.co.uk>
Stephen Pelc wrote:

[%X]

>>so you already have VFX Forth for Linux ?
>>I can't find any reference on your web site for this
>>topic
> 
> For the last couple of years I have promised to have this ready for
> Christmas! It's an "ample spare time" project. The prototype has been
> running for several years, but we have to bring it back into line with
> the existing source tree and do the documentation. We put in a new
> Linux box before Christmas and haven't touched it this year.
 
Having also come to Linux from a variety of DesqView-X, DOS, Win 3.11 and 
Win95, I can appreciate the effort it takes to get familiar enough with the 
Linux OS. Before Linux I did have a look at the FreeBSD side, but it seemed 
that I was always fighting it to get it to pick up one or two of the 
hardware bits and pieces needed to communicate with my networked 
printer and other machines (a particular brand of Ethernet and fast modem 
card that it was difficult to find details about).

Anyway, I am waiting, but I will not hold my breath.

-- 
********************************************************************
Paul E. Bennett ....................<···········@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ....EBA. http://www.electric-boat-association.org.uk/
********************************************************************
From: Ian Osgood
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1110998305.353612.261540@g14g2000cwa.googlegroups.com>
alex goldman wrote:
> John Passaniti wrote:
>
> > alex goldman wrote:
> >> Are there untyped Forths BTW?
> >
> > Huh?  Forth *is* a typeless language.  Items on the stack are a
"cell"
> > which individual words interpret as necessary.  Or put another way,
the
> > type of values on the stack is in the programmer's head, not in the
> > language.
>
> I didn't know. No one mentioned this when we were comparing it to
> PostScript. If Forth is completely typeless, like assembly, why is it
> rather slower than C/C++? (cf. shootout). Failure to use the
registers?
> http://shootout.alioth.debian.org/

Typing really has little relation to compiled speed; if anything, the
opposite.  The speed of assembly language comes from its exact match to
the target machine's architecture.  Forth *is* an assembly language,
but for a virtual stack machine, not for the actual hardware.

Forth is slow in the Alioth shootout because it is using gforth, which
is designed for portability, not raw speed.  The old shootout used to
also include Bernd's bigFORTH, a native code Forth compiler for Linux.
Its results were several times faster than gforth.

Unfortunately, the Alioth shootout only accepts languages which are
open source (which excludes the state-of-the-art compilers from PFE and
Forth Inc.) and have a Debian package (excludes bigFORTH).  It would be
interesting to see a comparison of the benchmark programs run on VFX,
SwiftForth, and bigFORTH on similar hardware to Alioth (1.1 GHz
Athlon).

Ian
From: Bernd Paysan
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <o1kqg2-5fp.ln1@miriam.mikron.de>
Ian Osgood wrote:
> Unfortunately, the Alioth shootout only accepts languages which are
> open source (which excludes the state-of-the-art compilers from PFE and
> Forth Inc.) and have a Debian package (excludes bigFORTH).

Looking at the Debian package HOWTO: it seems to be quite easy to create
one, even from other distributions. The remaining thing I need is a Debian
user who's willing to test the created .deb file.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Darin Johnson
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <821xachxi7.fsf@usa.net>
"Ian Osgood" <····@quirkster.com> writes:

> Unfortunately, the Alioth shootout only accepts languages which are
> open source (which excludes the state-of-the-art compilers from PFE and
> Forth Inc.) and have a Debian package (excludes bigFORTH).

Also most open source Forth compilers are very likely to be
running a threaded interpreted model.  I.e., the comparisons would
be between an interpreter and an optimizing compiler, rather than
between optimizing compilers.

I still find it somewhat annoying that people look at the results of
a benchmark for a particular implementation and conclude that the
language itself is inherently slow or fast.  Benchmarks never say
anything about the language itself.

-- 
Darin Johnson
    The opinions expressed are not necessarily those of the
    Frobozz Magic Hacking Company, or any other Frobozz affiliates.
From: Albert van der Horst
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <idnfh0.8pp@spenarnc.xs4all.nl>
In article <··············@usa.net>, Darin Johnson  <······@_usa_._net> wrote:
>"Ian Osgood" <····@quirkster.com> writes:
>
>> Unfortunately, the Alioth shootout only accepts languages which are
>> open source (which excludes the state-of-the-art compilers from PFE and
>> Forth Inc.) and have a Debian package (excludes bigFORTH).
>
>Also most open source Forth compilers are very likely to be
>running a threaded interpreted model.  Ie, the comparisons would
>be between an interpreter and an optimizing compiler, rather than
>between optimizing compilers.
>
>I still find it somewhat annoying that people look at the results of
>a benchmark for a particular implementation and conclude that the
>language itself is inherently slow or fast.  Benchmarks never say
>anything about the language itself.

Well, it is the responsibility of the language proponents to
supply speedy examples to a speed benchmark set. Anyone
supplying non-speedy examples to it is not promoting the
language, but rather doing it a disservice.

There is a qualitative aspect to those elaborate benchmarks too: is
the language capable of expressing solutions to all the example
problems? If so, the language doesn't necessarily look bad. This is
especially the case if users are warned that it is an interpreter, not
a compiler, or if there are other advantages like blistering compile
times or very short compiled programs.

>
>--
>Darin Johnson
>    The opinions expressed are not necessarily those of the
>    Frobozz Magic Hacking Company, or any other Frobozz affiliates.


--
-- 
Albert van der Horst,Oranjestr 8,3511 RA UTRECHT,THE NETHERLANDS
        One man-hour to invent,
                One man-week to implement,
                        One lawyer-year to patent.
From: Darin Johnson
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <82vf7lza9r.fsf@usa.net>
Albert van der Horst <······@spenarnc.xs4all.nl> writes:

> Well, it is the responsibility of the language proponents to
> supply speedy examples to a speed benchmark set. Any one
> supplying non-speedy examples to same is not promoting the
> language, rather doing a disservice.

Except that at various times, the fastest or best versions of
some languages have been commercial.  Forth usually doesn't
get hurt too badly by this, since most interpreted Forth systems
are fast enough that casual users don't notice any sluggishness.
Lisp has had a tougher time though, with users seeing slow
interpreted versions with stop-and-wait garbage collectors as
their first exposure and assuming those are how all Lisps behave.

-- 
Darin Johnson
    Luxury!  In MY day, we had to make do with 5 bytes of swap...
From: Christopher C. Stacy
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <uhdj5nwus.fsf@news.dtpq.com>
Darin Johnson <······@_usa_._net> writes:
> Lisp has had a tougher time though, with users seeing slow
> interpreted versions with stop-and-wait garbage collectors as
> their first exposure and assuming those are how all Lisps behave.

I am wondering when people, who today would be evaluating 
a programming system, ever saw such a thing?

I don't think I've seen anything like that since about 1981, which was
before most programmers graduating from college today were born.
From: Michael Livshin
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <s3psxtjndt.fsf@boss.verisity.com.cmm>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Darin Johnson <······@_usa_._net> writes:
>> Lisp has had a tougher time though, with users seeing slow
>> interpreted versions with stop-and-wait garbage collectors as
>> their first exposure and assuming those are how all Lisps behave.
>
> I am wondering when people, who today would be evaluating 
> a programming system, ever saw such a thing?

ahem.
how about Emacs?

-- 
Hit the philistines three times over the head with the Elisp reference manual.
                -- Michael A. Petonic
From: Darin Johnson
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <82mzswb3pu.fsf@usa.net>
······@news.dtpq.com (Christopher C. Stacy) writes:

> I don't think I've seen anything like that since about 1981, which was
> before most programmers graduating from college today were born.

Emacs-lisp is still popular.  I first used lisp after 1981, on Franz
Lisp which was relatively slow and could have noticeable GC waits.
It came free with BSD Unix, so was widely used.

But part of the point is that people who saw something like that
fifteen years ago are still around, and may still be thinking that
Lisp is inherently slow just because of their early exposure.

-- 
Darin Johnson
    "Floyd here now!"
From: Edi Weitz
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <umzsxnzk6.fsf@agharta.de>
On Mon, 21 Mar 2005 08:13:54 GMT, Darin Johnson <······@_usa_._net> wrote:

> Except that at various times, the fastest or best versions of some
> languages have been commercial.  Forth usually doesn't get hurt too
> badly by this, since most interpreted Forth systems are fast enough
> that casual users don't notice any sluggishness.  Lisp has had a
> tougher time though, with users seeing slow interpreted versions
> with stop-and-wait garbage collectors as their first exposure and
> assuming those are how all Lisps behave.

At least for Common Lisp this is not the case - maybe it was a long
time ago.  To the contrary, CMUCL, a non-commercial, open source
implementation produces faster code than the commercial Lisps in many
cases.  You will have a hard time finding a CL implementation that
produces only "slow interpreted" code nowadays... :)

Cheers,
Edi.

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Bernd Paysan
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <nnptg2-liq.ln1@vimes.paysan.nom>
Ian Osgood wrote:
> Unfortunately, the Alioth shootout only accepts languages which are
> open source (which excludes the state-of-the-art compilers from PFE and
> Forth Inc.) and have a Debian package (excludes bigFORTH).

Creating a Debian package is really simple. I added the necessary code to
the Makefile, and created one. If someone can test it, it's at

http://www.jwdt.com/~paysan/bigforth_2.1.0_i386.deb

It has no dependencies in it at the moment.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: C. G. Montgomery
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <d1i7ga$5jb$1@pyrite.mv.net>
Bernd Paysan wrote:

> Ian Osgood wrote:
>> Unfortunately, the Alioth shootout only accepts languages which are
>> open source (which excludes the state-of-the-art compilers from PFE and
>> Forth Inc.) and have a Debian package (excludes bigFORTH).
> 
> Creating a Debian package is really simple. I added the necessary code
> to the Makefile, and created one. If someone can test it, it's at
> 
> http://www.jwdt.com/~paysan/bigforth_2.1.0_i386.deb
> 
> It has no dependencies in it at the moment.
> 
It installed cheerfully on current Debian Sarge with dpkg -i.
I haven't tested it thoroughly, but it starts and correctly executes 
words  and  bye .

cheers   cgm

(followups changed to comp.lang.forth)
From: Anton Ertl
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <2005Mar27.152639@mips.complang.tuwien.ac.at>
"Ian Osgood" <····@quirkster.com> writes:
>Forth is slow in the Alioth shootout because it is using gforth, which
>is designed for portability, not raw speed.

Speed is also one of the goals of Gforth.

However, if the alioth shootout uses the Debian gforth package, it's
no wonder the results are less than stellar: E.g., on a 2.66GHz
Pentium 4 the gforth-fast Debian gforth-0.6.2 package we have here is
2-6 times slower than a gforth-fast from gforth-0.6.2 built for speed
(and if they used the gforth (debugging) binary, the slowdown is even
larger):

       fast  Debian  debug
siev   0.21s  0.42s  0.61s
bubble 0.24s  0.70s  0.77s
matrix 0.14s  0.86s  0.88s
fib    0.30s  0.60s  0.85s

The "fast" column was done with gforth-fast built with gcc-2.95 with
the --enable-force-reg option.

The "Debian" column was done with gforth-fast from the Debian package,
which was built with gcc-3.3.3 (which has gcc bug PR15242), without
--enable-force-reg (ok, we could enable that by default, but better
safe than sorry) and with dynamic native code generation disabled by
default (since our default is to enable this, this is probably
Debian's fault).

The "debug" column was done with the gforth binary from the Debian
package.

Followups to comp.lang.forth

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.complang.tuwien.ac.at/forth/ansforth/forth200x.html
From: Howard Lee Harkness
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <3shm31loq2k68uhkai9b27h355ga5qdhp3@4ax.com>
alex goldman <·····@spamm.er> wrote:

>If Forth is completely typeless, like assembly, why is it
>rather slower than C/C++?

Choice of algorithm overwhelms all other considerations, including choice of
language.  So the question-begging here is meaningless.  As for language speed,
are you referring to time measured by a stopwatch or by a calendar?  The latter
can be more important.

In addition to those observations, I have one other.  In the real world, there
are only two speeds for software: 1) fast enough, and 2) not fast enough.

--
Howard Lee Harkness
www.Texas-Domains.com
www.HostPCI.com
From: Christopher Browne
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <39dpe2F611dajU3@individual.net>
A long time ago, in a galaxy far, far away, ·······@netcourrier.com wrote:
>>
>> The worst thing about Forth IMHO is the stack clutter.  When every
>> function has some swaps, dups, overs and nips in it, it takes
>> concentration to keep track of the stack in your mind.  Sure, there
> are
>> stack comments, but when you write those, you might just as well use
>> local variables instead.
>
> Objection! This is not a problem that belong to Forth, but a problem
> with you ( or those who wrote the programs you've seen ) who don't know
> how to program in Forth.
> In a Forth programmer eye, a program that has "stack clutter" is bad,
> and begs to be rewritten/redesigned. Good programs minimize stack
> juggling, that is nothing more than a DUP or a SWAP or an OVER to glue
> things together. Such programs can be read by newbies just an hour
> after their first exposure to the langage ( that is, when they finished
> to learn Forth ;)

Note that Lisp has exactly the same property with respect to "list
clutter."

When I was a new newbie, my Lisp code was as full of
car/cdr/cadr/cdar/c??r deteriorata as it could get.  Students that get
"forced" to write a bit of Lisp for some CS course tend to get stuck
in this clutter.

With more experience, and the attitudes that:
 a) I'll actually use more of the language than just the bits some
    prof had in some curriculum
 b) I'm writing for clarity, not to get an assignment done tomorrow

the use of:
 a) Refactoring
 b) Thinking ahead as to what "shape" the parameters for functions
    should be
 c) Specialized types (e.g. - defclass, defstruct)
makes the car/cdr "crud" go away.

Indeed, once in a while, I do a 'grep' across my Lisp code base to see
if there are any of those that should get stamped out.

Brodie's _Thinking Forth_ tells much the same story of how clear Forth
code won't need terribly many stack manipulation words...
-- 
"cbbrowne",·@","gmail.com"
http://linuxdatabases.info/info/linuxdistributions.html
"It is always possible to aglutenate  multiple separate problems into a
single  complex interdependent solution. In  most cases this  is a bad
idea." -- RFC 1925
From: Alexander Repenning
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1110554674.722134.28410@g14g2000cwa.googlegroups.com>
Ulrich Hobelmann wrote:
> IMHO the worst thing in Lisp is its many parentheses. []
> If you want to chain several functions [..] in Lisp this looks like
(foo (bar (baz bla)))

I don't know what is causing the sudden and utterly bizarre parentheses
paranoia. The claim about Lisp using many parens is old and false. Most
popular procedural languages would require you to rewrite your
expression:

foo (bar (baz (bla ())))

8 versus 6 parens: Lisp wins!

If you look at more interesting examples don't forget to count "{" and
"}" as well ...

Alex
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <87y8cui23o.fsf@qrnik.zagroda>
[comp.lang.forth removed]

"Alexander Repenning" <·····@cs.colorado.edu> writes:

> I don't know what is causing the sudden and utterly bizarre parentheses
> paranoia. The claim about Lisp using many parens is old and false. Most
> popular procedural languages would require you to rewrite your
> expression:
>
> foo (bar (baz (bla ())))
>
> 8 versus 6 parens: Lisp wins!

Lisp does require more parens than most languages.

I've just taken a random function written in Emacs Lisp:

(defun kogut-scan-comment (state)
  (let* ((state-here (kogut-state-here))
         (state-outside (or state-here state))
         (indent-inside (cdr (kogut-indent-after-opening state-outside)))
         (nested 1)
         (result nil))
    (forward-char 2)
    (while (null result)
      (skip-chars-forward " \t\n")
      (let ((ch (char-after)))
        (if (kogut-finished-indenting)
            (setq result (cons :found indent-inside))
          (let ((sub-indent-here (kogut-indent-here-quoted)))
            (forward-char)
            (cond ((and (eq ch ?/) (eq (char-after) ?*))
                   (forward-char)
                   (setq nested (1+ nested)))
                  ((and (eq ch ?*) (eq (char-after) ?/))
                   (forward-char)
                   (if (zerop (setq nested (1- nested)))
                       (setq result state)))
                  (t (skip-chars-forward "^/*\n")))
            (if sub-indent-here (setq indent-inside sub-indent-here))))))
    result))

and translated it to my language Kogut (without trying to change
idioms, just transliterating as directly as possible, so we can
compare the weight of analogous syntactic constructs; of course it
would not run because there is no Kogut-powered Emacs):

let KogutScanComment state {
   let stateHere = KogutStateHere();
   let stateOutside = stateHere->IfNull {state};
   var indentInside = (KogutIndentAfterOpening stateOutside).right;
   var nested = 1;
   var result;
   ForwardChar 2;
   loop {
      if (~IsNull result) =>
      SkipCharsForward " \t\n";
      let ch = CharAfter();
      if (KogutFinishedIndenting()) {result = found:indentInside}
      else {
         let subIndentHere = KogutIndentHereQuoted();
         ForwardChar();
         if (ch == "/" & CharAfter() == "*") {
            ForwardChar();
            nested = nested + 1
         }
         (ch == "*" & CharAfter() == "/") {
            ForwardChar();
            if ((nested = nested - 1) == 0) {result = state}
         }
         else {SkipCharsForward "^/*\n"};
         if (~IsNull subIndentHere) {indentInside = subIndentHere}
      };
      again()
   };
   result
};

Result:
Lisp - 108 parens
Kogut - 56 parens and braces (52% of Lisp)

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pascal Bourguignon
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <87acpakty5.fsf@thalassa.informatimago.com>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> [comp.lang.forth removed]
> 
> "Alexander Repenning" <·····@cs.colorado.edu> writes:
> 
> > I don't know what is causing the sudden and utterly bizarre parentheses
> > paranoia. The claim about Lisp using many parens is old and false. Most
> > popular procedural languages would require you to rewrite your
> > expression:
> >
> > foo (bar (baz (bla ())))
> >
> > 8 versus 6 parens: Lisp wins!
> 
> Lisp does require more parens than most languages.
> 
> I've just taken a random function written in Emacs Lisp:
> [...]
> and translated it to my language Kogut (without trying to change
> idioms, just transliterating as directly as possible, so we can
> compare the weight of analogous syntactic constructs; of course it
> would not run because there is no Kogut-powered Emacs):
> 
> let KogutScanComment state {
>    let stateHere = KogutStateHere();
>    let stateOutside = stateHere->IfNull {state};
>    var indentInside = (KogutIndentAfterOpening stateOutside).right;
>    var nested = 1;
>    var result;
>    ForwardChar 2;
>    loop {
>       if (~IsNull result) =>
>       SkipCharsForward " \t\n";
>       let ch = CharAfter();
>       if (KogutFinishedIndenting()) {result = found:indentInside}
>       else {
>          let subIndentHere = KogutIndentHereQuoted();
>          ForwardChar();
>          if (ch == "/" & CharAfter() == "*") {
>             ForwardChar();
>             nested = nested + 1
>          }
>          (ch == "*" & CharAfter() == "/") {
>             ForwardChar();
>             if ((nested = nested - 1) == 0) {result = state}
>          }
>          else {SkipCharsForward "^/*\n"};
>          if (~IsNull subIndentHere) {indentInside = subIndentHere}
>       };
>       again()
>    };
>    result
> };
> 
> Result:
> Lisp - 108 parens
> Kogut - 56 parens and braces (52% of Lisp)

You should add semi-colons too, since they're significant.

  Kogut - 72 parens, braces and semi-colons (67% of Lisp)

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

In a World without Walls and Fences, 
who needs Windows and Gates?
From: Gareth McCaughan
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <87is3yyqo7.fsf@g.mccaughan.ntlworld.com>
Marcin 'qrczak' Kowalczyk wrote:

> Lisp does require more parens than most languages.

Well, probably, since it uses them where other languages
use other sorts of punctuation. (I consider prepositions
like "in" in contexts like "for x in <some list>" to be
effectively punctuation, too.)

Unless there's some reason to think that parentheses
are intrinsically harmful, the fact that Lisp uses more
parens is not very interesting; it uses fewer braces,
square brackets, colons, whatever. Maybe it ends up
being more punctuation or less, when compared to some
particular language. That isn't terribly interesting
either. What matters is the effect that the punctuation,
combined with a hundred other language features that
somehow seem to get forgotten in discussions like this,
has on the readability[1] and writeability of the
code.

    [1] Readability *to experienced users* is generally
        most important.

[SNIP: a bit of elisp and a translation into Marcin's
own language; the elisp has more parens.]

The two examples are very nearly equal in length.
They contain similar amounts of punctuation. Neither
seems obviously much harder for adepts to read
than the other.

I understand that it wasn't you who introduced the
question of how-many-parentheses; but, for anyone
who thinks it matters: for goodness' sake, why?

-- 
Gareth McCaughan
.sig under construc
From: Ulrich Hobelmann
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <39eujvF61crsaU1@individual.net>
Gareth McCaughan wrote:
> The two examples are very nearly equal in length.
> They contain similar amounts of punctuation. Neither
> seems obviously much harder for adepts to read
> than the other.

Right.  In fact the Lisp version looks less cluttered, since the 
delimiters are more uniform.

> I understand that it wasn't you who introduced the
> question of how-many-parentheses; but, for anyone
> who thinks it matters: for goodness' sake, why?
> 

If you want to program in Lisp I think it doesn't really matter.

Some constructs (like "let") could be more readable, but of course this 
could be done with a macro as well.

I'm mostly experimenting around (maybe to prove to myself that I'm still 
young and wild :D).  The inspirations for this were (a) Forth, which 
hardly has any delimiters at all, and (b) Forth's postfix notation, 
which has the very nice property that code executes in exactly the 
written order (unlike a nested function call in Lisp).

I will get over it ;)
From: Alexander Repenning
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <1110645434.413633.258630@f14g2000cwb.googlegroups.com>
Marcin 'Qrczak' Kowalczyk wrote:

> Lisp does require more parens than most languages.
> I've just taken a random function written in Emacs Lisp: [long lisp
> function]

No fair ;-)  remember the original discussion was about chained
function calls: foo calls bar calls baz calls bla

Lisp: (foo (bar (baz (bla))))

don't know your Kogut language but judging from  your example you would
have to write:

Kogut: foo (bar (baz (bla())))

8 versus 8 parens: Lisp wins (by default ;-)

also, if the objective is to minimize the number of parens why do you
need to wrap up the condition part of an IF, e.g.,
(KogutFinishedIndenting()) ? You don't need that in Lisp.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <878y4s237y.fsf@qrnik.zagroda>
"Alexander Repenning" <·····@cs.colorado.edu> writes:

> No fair ;-)  remember the original discussion was about chained
> function calls: foo calls bar calls baz calls bla

The claim was that "the claim about Lisp using many parens is old and
false", and this was illustrated by chained function calls.

The point is that even if chained function calls require a similar
amount of parens as e.g. C or even less, syntactic constructs other
than chained function calls push the weight in the direction of Lisp
having more parens. For example local variables are very paren-heavy
in Lisp, compared to almost any other syntax.
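
For illustration (a generic example, not from the original post),
binding two locals:

   ;; Lisp: six parens for the binding list alone
   (let ((x 1)
         (y 2))
     (+ x y))

   ;; roughly the same in a C- or Kogut-like syntax, with no
   ;; parens around the bindings:
   ;;   var x = 1; var y = 2; x + y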

I can agree that for Lispers Lisp parens are not a problem (even
though I find them poorly readable); that can't be argued about,
because it is subjective. But it's clear that there *are* many parens
in Lisp: roughly twice as many as the average language, I would claim.

As with benchmarks, it's fairer to measure actual snippets used in
practice than to invent microbenchmarks which concentrate on a single
feature.

> Lisp: (foo (bar (baz (bla))))
>
> don't know your Kogut language but judging from your example you
> would have to write:
>
> Kogut: foo (bar (baz (bla())))

Actually you can *also* write:

   bla()->baz->bar->foo

The aim of this syntactic sugar is to reduce the need for syntactic
nesting when the mind can follow the computation sequentially. No
mental stack space is needed for pending calls; the reader can follow
the source while keeping only one temporary in mind at a time, so the
syntax reflects that.

Lisp LET* can be used to the same effect, as it also keeps the
nesting level independent of the number of calls, but it's otherwise
quite heavy and requires inventing names for the temporaries.
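
For illustration (not from the original post; foo, bar, baz and bla
are assumed to be ordinary functions, bla taking no argument), the
three spellings side by side:

   ;; nested calls: read inside-out
   (foo (bar (baz (bla))))

   ;; LET*: reads in execution order, but every intermediate
   ;; result needs a name
   (let* ((a (bla))
          (b (baz a))
          (c (bar b)))
     (foo c))

   ;; the arrow form reads in the same order, without the
   ;; temporaries:  bla()->baz->bar->foo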

> also, if the objective is to minimize the number of parens why do you
> need to wrap up the condition part of an IF, e.g.,
> (KogutFinishedIndenting()) ? You don't need that in Lisp.

Yes, there are specific cases where Lisp syntax is more
paren-efficient; I haven't denied that. They are a minority, however,
outvoted by the others.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pascal Bourguignon
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <87hdjgir1r.fsf@thalassa.informatimago.com>
"Alexander Repenning" <·····@cs.colorado.edu> writes:

> Marcin 'Qrczak' Kowalczyk wrote:
> 
> > Lisp does require more parens than most languages.
> > I've just taken a random function written in Emacs Lisp: [long lisp
> > function]
> 
> No fair ;-)  remember the original discussion was about chained
> function calls: foo calls bar calls baz calls bla
> 
> Lisp: (foo (bar (baz (bla))))

If you had a lot of this kind of chained function calls in Lisp, you'd use:

    (funcall (combine foo bar baz bla))
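
COMBINE is not a standard Common Lisp operator; a minimal sketch of
one way to write it, taking function objects rather than the bare
names above (which would call for a macro instead):

    (defun combine (&rest functions)
      ;; Compose FUNCTIONS so that the rightmost is applied first:
      ;; (funcall (combine #'foo #'bar #'baz #'bla))
      ;; computes (foo (bar (baz (bla)))).
      (lambda (&rest args)
        (reduce #'funcall (butlast functions)
                :from-end t
                :initial-value (apply (first (last functions)) args))))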

> don't know your Kogut language but judging from  your example you would
> have to write:
> 
> Kogut: foo (bar (baz (bla())))
> 
> 8 versus 8 parens: Lisp wins (by default ;-)

  2*n versus 2: Lisp wins by a wide margin.

   
> also, if the objective is to minimize the number of parens why do you
> need to wrap up the condition part of an IF, e.g.,
> (KogutFinishedIndenting()) ? You don't need that in Lisp.
 

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Nobody can fix the economy.  Nobody can be trusted with their finger
on the button.  Nobody's perfect.  VOTE FOR NOBODY.
From: Ulrich Hobelmann
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <39es8cF601f54U1@individual.net>
Alexander Repenning wrote:
> I don't know what is causing the sudden and utterly bizarre parentheses
> paranoia. The claim about Lisp using many parens is old and false. Most
> popular procedural languages would require you to rewrite your
> expression:
> 
> foo (bar (baz (bla ())))
> 
> 8 versus 6 parens: Lisp wins!
> 
> If you look at more interesting examples don't forget to count "{" and
> "}" as well ...

That's why I much prefer Lisp to C dialects (and syntax-wise also to ML).

It's not exactly paranoia; small changes (like
(let (a (foo) b (bar)) (+ a b))
instead of
(let ((a (foo)) (b (bar))) (+ a b))
)
would already make things nicer IMHO.  You lose uninitialized variables 
(compared to CL), but gain the possibility to assign multiple values like:
(let ((a b c) (return-3-vals)) (+ a b c))
in a very readable way.

In general it is the reverse function order in the
(foo (bar (baz bla))) expression (and minimalist inspiration from Forth) 
that made me think.
From: Pascal Bourguignon
Subject: Re: PostLisp, a language experiment
Date: 
Message-ID: <87oedpja84.fsf@thalassa.informatimago.com>
Ulrich Hobelmann <···········@web.de> writes:

> Alexander Repenning wrote:
> > I don't know what is causing the sudden and utterly bizarre parentheses
> > paranoia. The claim about Lisp using many parens is old and false. Most
> > popular procedural languages would require you to rewrite your
> > expression:
> > foo (bar (baz (bla ())))
> > 8 versus 6 parens: Lisp wins!
> > If you look at more interesting examples don't forget to count "{"
> > and
> > "}" as well ...
> 
> That's why I much prefer Lisp to C dialects (and syntax-wise also to ML).
> 
> It's not exactly paranoia; small changes (like
> (let (a (foo) b (bar)) (+ a b))
> instead of
> (let ((a (foo)) (b (bar))) (+ a b))
> )
> would already make things nicer IMHO.  You lose uninitialized
> variables (compared to CL), but gain the possibility to assign
> multiple values like:
> (let ((a b c) (return-3-vals)) (+ a b c))
> in a very readable way.
> 
> In general it is the reverse function order in the
> (foo (bar (baz bla))) expression (and minimalist inspiration from
> Forth) that made me think.

You already have in COMMON-LISP everything you need to write this way
if you want.  The details of the macro have already been discussed
here recently.
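
A minimal sketch of such a macro (the name FLAT-LET and the details
are illustrative only, not the macro from that discussion; the
multiple-value case is not handled):

    (defmacro flat-let (bindings &body body)
      ;; Turn a flat binding list (a (foo) b (bar)) into the LET*
      ;; binding list ((a (foo)) (b (bar))).
      (loop for (var init) on bindings by #'cddr
            collect (list var init) into pairs
            finally (return `(let* ,pairs ,@body))))

    ;; (flat-let (a (foo) b (bar)) (+ a b))
    ;; expands into (let* ((a (foo)) (b (bar))) (+ a b))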

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

There is no worse tyranny than to force a man to pay for what he does not
want merely because you think it would be good for him. -- Robert Heinlein