From: Fernando Mato Mira
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37628EBF.207F7A8F@iname.com>
Cameron Laird wrote:

> www.sunworld.com/swol-11-1998/swol-11-regex.html>.  I
> wish the Guile folk well, and certainly RSM and others
> continue to invest energy in it.  From my distant ob-
> servation post, it doesn't feel as though it has the
> critical mass of resources necessary to "take off".

Maybe I'm dreaming, but I have the impression we've just passed the global
minimum in Lisp usage history. With Guile as a unification theme, it suddenly
becomes very interesting to learn Scheme, as a lot of folks end up using one
or another GNU tool. As a side effect, some newcomers to Tk that would have
normally used tcl could end up going the STk way (unfortunately, STk is not
publicized enough). Things like Kawa also increase the feel of security
regarding code reusability. And some of those people will end up wanting to
write full-fledged applications "with parentheses".
But it's probably more the fact that the Java hype has quieted down that makes
me hopeful sooner than expected (although not in the same league, the fact
that surprises _can_ happen, like the Linux/NT "battle for the enterprise"
arriving so fast, is reassuring).

> OK, that's too coy.  I recognize that *I* wanted him
> to have asked that, but he definitely didn't do so.

Well, a hot headline is a classic press trick to attract readers.
Also, the premise was that _I_ believe that it's philosophically broken,
and I wanted to find out whether the disgust for tcl I've seen many people
express had some basis in generally accepted principles.
[For example, I can think the Eiffel syntax is `philosophically' broken, but
as `infix' syntaxes go, it is not. And it's a good language (I programmed
in it for 2 years, years ago, but only because I didn't have a CL compiler
then, and I wanted to do OO :-> )].

From: Fernando Mato Mira
Subject: OO (was: Why is tcl broken?)
Date: 
Message-ID: <3762988E.C94AA252@iname.com>
Fernando Mato Mira wrote:

> [For example, I can think the Eiffel syntax is `philosophically' broken, but
> as `infix' syntaxes go, it is not. And it's a good language (I programmed

Actually, it's a _great_ language to learn SE (for OO pretty much too, but of
course only in the restricted single-dispatch way most people know about.
I like to call that "Subject Oriented").

Smalltalk taught me OO.
Occam taught me about concurrency.
Eiffel taught me SE.
CLOS taught me what OO really means.

--

"Object Oriented is to not think only about your `subject' and forget about your
`arguments' - the syntactic objects of your sentence"

Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Peter
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <phXineXly-1206991437590001@usrns72.dialup.hawaii.edu>
In article <·················@iname.com>, Fernando Mato Mira
<········@iname.com> wrote:

> Smalltalk taught me OO.
> Occam taught me about concurrency.
> Eiffel taught me SE.
> CLOS taught me what OO really means.

If you are a fan of CLOS, you might want to check out the Dylan
programming language.

It is basically a thoroughly modernized descendant of Lisp/CLOS with an
infix instead of prefix syntax.  In Dylan, almost everything is an object
including the primitive data types (such as integers).  Methods and
classes are also first class objects.

--------------------------------------------------------------------
p h i n e l y @hawaii.edu

Dylan... the high-performance dynamic language
 * open-source Unix version:  http://www.gwydiondylan.org/
 * free Win32 version: http://www.harlequin.com/products/ads/dylan/
---------------------------------------------------------------------
From: Fernando Mato Mira
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <3763C746.5CA8FE78@iname.com>
Peter wrote:

> If you are a fan of CLOS, you might want to check out the Dylan
> programming language.
>
> It is basically a thoroughly modernized descendant of Lisp/CLOS with an
> infix instead of prefix syntax.  In Dylan, almost everything is an object

Although Dylan never really made it into "Fernando's eligibility list"
because it was not a serious contender against Scheme and CL yet, it
definitely got banished the day the Lisp syntax was dropped. I'm still
waiting for the day when the Dylan fans will reinstate the alternative.

But my dream is a `parenthesized Cecil'
http://www.cs.washington.edu/research/projects/cecil/

or "all you wanted in CLOS (and more) since you got `corrupted' by Eiffel"
;-)

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Andreas Bogk
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <m3u2sby4gf.fsf@soma.andreas.org>
Fernando Mato Mira <········@iname.com> writes:

> Although Dylan never really made it into "Fernando's eligibility list"
> because it was not a serious contender against Scheme and CL yet, it
> definitely got banished the day the Lisp syntax was dropped. I'm still
> waiting for the day when the Dylan fans will reinstate the alternative.

This is an age-old flamewar. It just so happens that there are people
who like infix syntax and detest prefix, and then there are people who
love their parens and hate infix.

It's a matter of preference; I happen to like infix, and I'm not
gonna change.

> But my dream is a `parenthesized Cecil'

You're obviously one of THEM :-).

Andreas

-- 
Reality is two's complement. See:
ftp://ftp.netcom.com/pub/hb/hbaker/hakmem/hacks.html#item154
From: Fernando Mato Mira
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <3764D5C4.68DB9A66@iname.com>
Andreas Bogk wrote:
> 
> Fernando Mato Mira <········@iname.com> writes:

> This is an age-old flamewar. It just so happens that there are people
> who like infix syntax and detest prefix, and then there are people who
> love their parens and hate infix.

I have no idea what flamewars have been going on in comp.lang.dylan,
as it's the first time I set foot here, so FWIW, here's what I think:

1. The Dylan case is different. It was supposed to have two syntaxes.
2. I was not excited about the Dylan effort when it came out, as its main
motivation was to create a new language for some Apple palmtop, and this
consumed resources that could have been used to improve CL (it could be
argued that it was an `improved Scheme', but how about supporting EULisp
instead?).
3. CMU shuts down CMUCL and starts to work on Dylan. Bad.
4. Apple decides to drop the Lisp syntax. Bad.
5. Apple drops Dylan.
6. The Open Source community embraces Dylan. Great. Multimethods to the
mainstream.

The question is: what prevents you now from going back to the original
idea?
[I'm sorry, but if I were funded to spend time myself creating a Lisp
frontend for something, it would be for Cecil. If CLOS were too heavy for
something, there's much more to gain over there. And a good prototype-based
dialect would be a more interesting addition to the Lisp family.]

Regards,

-- 
Fernando D. Mato Mira                    
Real-Time SW Eng & Networking            
Advanced Systems Engineering Division
CSEM                             
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Klaus Schilling
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <87pv2zjhhl.fsf@home.ivm.de>
Andreas Bogk <·······@andreas.org> writes:

> Fernando Mato Mira <········@iname.com> writes:
> 
> > Although Dylan never really made it into "Fernando's eligibility list"
> > because it was not a serious contender against Scheme and CL yet, it
> > definitely got banished the day the Lisp syntax was dropped. I'm still
> > waiting for the day when the Dylan fans will reinstate the alternative.
> 
> This is an age-old flamewar. It just so happens that there are people
> who like infix syntax and detest prefix, and then there are people who
> love their parens and hate infix.

Prefix rocks, infix is crap.

Klaus Schilling
From: Fernando D. Mato Mira
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <3764F3FE.F7CBE6BA@acm.org>
Klaus Schilling wrote:

> Prefix rocks, infix is crap.

Wow, wow, wow. Let's not go there..

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Robin Becker
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <0Sgs3BA5FPZ3Ew6E@jessikat.demon.co.uk>
In article <·················@acm.org>, Fernando D. Mato Mira
<········@acm.org> writes
>Klaus Schilling wrote:
>
>> Prefix rocks, infix is crap.
>
>Wow, wow, wow. Let's not go there..
postfix, prefix, infix etc etc are isosemantic


-- 
tree searching-ly yrs Robin Becker
From: Fernando D. Mato Mira
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <37652B89.6ED1E8CA@acm.org>
Robin Becker wrote:

> postfix, prefix, infix etc etc are isosemantic

Yup. But the deal is to be able to write a parser in an hour (say, in
1950something), and
be setup until the next comet hits the Earth.

Not to mention how invaluable it is not to waste the mandatory compiler
course with the unnecessary half of the `Dragon book'.


And when infix goes wild, a LOT of dollars go down the drain (the one
with a bug-free C++ frontend please raise your hand!).

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Maxwell Sayles
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <3765B3B8.E45F373B@spots.ab.ca>
Robin Becker wrote:

> postfix, prefix, infix etc etc are isosemantic

for myself and the others who might not know... can we get an example of
each?
and i remember someone mentioned parenthesized vs non-parenthesized...

Maxwell Sayles
From: William Tanksley
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7mbse1.mb7.wtanksle@dolphin.openprojects.net>
On Tue, 15 Jun 1999 01:54:53 GMT, Maxwell Sayles wrote:
>Robin Becker wrote:

>> postfix, prefix, infix etc etc are isosemantic

>for myself and the others who might not know... can we get an example of
>each?
>and i remember someone mentioned parenthesized vs non-parenthesized...

Infix (ML and Prolog):

  3 + (4 minus 10) == 1
(parentheses are not optional)

Prefix (C, Lisp, Scheme):

  function(that(x(1)))
or:
  (do-this (do-that (x 1)))
(Parentheses are not optional)

Postfix (Forth, Postscript):

  3 4 + SWAP MOD do-that
  rinse on   agitate  10 seconds   rinse off
(Parentheses are not optional)

As a common thread, note that in none of the languages are parentheses
optional.  In Postfix parentheses are not optional because they have no
meaning; in the other languages they're assigned arbitrary meaning.

I suspect that the person who mentioned "parenthesised" languages to you
was actually comparing a language which uses only parentheses for
grouping, such as classical Scheme, with a language like C which uses lots
of other punctuation marks as well.

My preference?  I use C at my job.  I prefer postfix (Forth) for its lack
of punctuation.  I like ML and Prolog because they let me create my own
punctuation (but then I don't use them much).

Heck, just learn them all.  There's time -- there's always time to do the
stuff you enjoy.

>Maxwell Sayles

-- 
-William "Billy" Tanksley
Utinam logica falsa tuam philosophiam totam suffodiant!
   :-: May faulty logic undermine your entire philosophy!
From: Fernando Mato Mira
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <37660EE8.D010DA6E@iname.com>
William Tanksley wrote:

> My preference?  I use C at my job.  I prefer postfix (Forth) for its lack
> of punctuation.  I like ML and Prolog because they let me create my own
> punctuation (but then I don't use them much).

BTW, you can do infix stuff in Common Lisp
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/lisp/code/syntax/0.html

eg.

(defun bar (x y)
   (whatever))

(defun foo (u v x y z)
  (typical-lisp-code)
  (lets-just-copy-a-formula-from-a-book-and-use-it-here
    #I( u + v * bar(x,y) ^^ z ))
  (blah-blah))
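
[In case the #I syntax looks like magic: it is just a reader macro. Here is
a minimal sketch of the idea; this is NOT the package linked above, just a
hypothetical toy that handles a single binary operator, to show the mechanism:

;; Toy infix reader macro: #I(a + b) reads as (+ a b).  The real package
;; handles precedence, function calls, exponentiation, and so on.
(set-dispatch-macro-character
 #\# #\I
 (lambda (stream subchar arg)
   (declare (ignore subchar arg))
   (destructuring-bind (lhs op rhs) (read stream t nil t)
     (list op lhs rhs))))

;; After loading this, '#I(u + v) is read as (+ U V) before any
;; evaluation or compilation takes place.]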
From: Harvey J. Stein
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <m2wvx5b0hb.fsf@blinky.bfr.co.il>
········@dolphin.openprojects.net (William Tanksley) writes:

 > On Tue, 15 Jun 1999 01:54:53 GMT, Maxwell Sayles wrote:
 > >Robin Becker wrote:
 > 
 > >> postfix, prefix, infix etc etc are isosemantic
 > 
 > >for myself and the others who might not know... can we get an example of
 > >each?
 > >and i remember someone mentioned parenthesized vs non-parenthesized...
 > 
 > Infix (ML and Prolog):
 > 
 >   3 + (4 minus 10) == 1
 > (parentheses are not optional)
 > 
 > Prefix (C, Lisp, Scheme):
 > 
 >   function(that(x(1)))
 > or:
 >   (do-this (do-that (x 1)))
 > (Parentheses are not optional)
 > 
 > Postfix (Forth, Postscript):
 > 
 >   3 4 + SWAP MOD do-that
 >   rinse on   agitate  10 seconds   rinse off
 > (Parentheses are not optional)
 > 
 > As a common thread, note that in none of the languages are parentheses
 > optional.  In Postfix parentheses are not optional because they have no
 > meaning; in the other languages they're assigned arbitrary meaning.

This isn't a completely fair example.  Postfix can get away without
parentheses only when a) everything that's not a number is a function
and b) the arity (# of arguments) of each function is known and fixed.

Given such restrictions one can also drop the parentheses in prefix
notation.

Infix still requires parentheses even given the above restrictions.
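
To make the prefix claim concrete, here is a hypothetical sketch under the
restrictions above (every non-number is an operator with a known, fixed
arity of 2):

;; Parsing parenthesis-free prefix notation when arities are fixed.
(defparameter *arity* '((+ . 2) (- . 2) (* . 2) (/ . 2)))

(defun parse-prefix (tokens)
  "Turn a flat prefix token list such as (+ 3 * 4 10)
into the nested form (+ 3 (* 4 10))."
  (labels ((next ()
             (let* ((tok (pop tokens))
                    (arity (cdr (assoc tok *arity*))))
               (if arity
                   (cons tok (loop repeat arity collect (next)))
                   tok))))
    (next)))

;; (parse-prefix '(+ 3 * 4 10))  =>  (+ 3 (* 4 10))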

-- 
Harvey J. Stein
Bloomberg LP
·······@bfr.co.il
From: William Tanksley
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7mdqon.nv7.wtanksle@dolphin.openprojects.net>
On 16 Jun 1999 00:46:56 +0300, Harvey J. Stein wrote:
>········@dolphin.openprojects.net (William Tanksley) writes:

> > On Tue, 15 Jun 1999 01:54:53 GMT, Maxwell Sayles wrote:
> > >Robin Becker wrote:

> > >> postfix, prefix, infix etc etc are isosemantic

> > >for myself and the others who might not know... can we get an example of
> > >each?
> > >and i remember someone mentioned parenthesized vs non-parenthesized...

> > Prefix (C, Lisp, Scheme):

> >   function(that(x(1)))
> > or:
> >   (do-this (do-that (x 1)))
> > (Parentheses are not optional)

> > Postfix (Forth, Postscript):

> >   3 4 + SWAP MOD do-that
> >   rinse on   agitate  10 seconds   rinse off
> > (Parentheses are not optional)

> > As a common thread, note that in none of the languages are parentheses
> > optional.  In Postfix parentheses are not optional because they have no
> > meaning; in the other languages they're assigned arbitrary meaning.

>This isn't a completely fair example.  Postfix can get away without
>parentheses only when a) everything that's not a number is a function
>and b) the arity (# of arguments) of each function is known and fixed.

Not true -- in Forth part (a) holds, but part (b) doesn't.  The number of
arguments and number of returns can vary arbitrarily; they're put onto and
taken off of a stack.

And I don't see the relevance of restriction (a).  It seems that with or
without it we would get the same result -- postfix doesn't need
parentheses.

Oh, for example, Forth "immediate" words.  These aren't the same as other
functions, since they execute at compile time (in all other respects they
are functions, though).

Also, HP RPN uses syntactic elements which are neither numbers nor
functions.

Therefore, neither of your conditions is necessary.

>Given such restrictions one can also drop the parentheses in prefix
>notation.

You're right here, though.

>Infix still requires parentheses even given the above restrictions.

Yes.

We see that postfix never requires parentheses, and prefix can skip its
parentheses under certain conditions.  Are there any conditions under
which infix can ignore parentheses?

>Harvey J. Stein

-- 
-William "Billy" Tanksley
Utinam logica falsa tuam philosophiam totam suffodiant!
   :-: May faulty logic undermine your entire philosophy!
From: James McCartney
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <asynthREMOVE-ya023180001406991402290001@news.io.com>
In article <··············@home.ivm.de>, Klaus Schilling
<···············@home.ivm.de> wrote:


>Prefix rocks, infix is crap.

No, you mean:

Rocks prefix? Is infix crap?

James McCartney  asynth <at> io <dot> com
From: Stig Hemmer
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <ekvn1y2efz0.fsf@gnoll.pvv.ntnu.no>
> Prefix rocks, infix is crap.

(think I (meant you (and (rocks prefix) (is-crap infix))))

Stig Hemmer,
Jack of a Few Trades.
From: William Tanksley
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7mb2oq.lpd.wtanksle@dolphin.openprojects.net>
On 14 Jun 1999 21:31:47 +0200, Stig Hemmer wrote:
>> Prefix rocks, infix is crap.

>(think I (meant you (and (rocks prefix) (is-crap infix))))

prefix rocks infix crap is   and false =

...as long as we're fighting...  :-)

>Stig Hemmer,
>Jack of a Few Trades.

(I like all the languages I know, except Fortran.  I even like C++,
although I wish they could break backwards compatibility.)

-- 
-William "Billy" Tanksley
Utinam logica falsa tuam philosophiam totam suffodiant!
   :-: May faulty logic undermine your entire philosophy!
From: Fernando Mato Mira
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <37660FEC.D4F3371B@iname.com>
William Tanksley wrote:

> (I like all the languages I know, except Fortran.  I even like C++,
> although I wish they could break backwards compatibility.)

Quiz: What do Scheme, Common Lisp, HPF and Java have in common?
From: Ian Wild
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <37661C02.9CA6E258@cfmu.eurocontrol.be>
Fernando Mato Mira wrote:
> 
> William Tanksley wrote:
> 
> > (I like all the languages I know, except Fortran.  I even like C++,
> > although I wish they could break backwards compatibility.)
> 
> Quiz: What do Scheme, Common Lisp, HPF and Java have in common?

And when's Java*, the massively parallel version, coming out?
From: Fernando Mato Mira
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <376650FA.1AED1606@iname.com>
Ian Wild wrote:

> Fernando Mato Mira wrote:
> >
> > William Tanksley wrote:
> >
> > > (I like all the languages I know, except Fortran.  I even like C++,
> > > although I wish they could break backwards compatibility.)
> >
> > Quiz: What do Scheme, Common Lisp, HPF and Java have in common?
>
> And when's Java*, the massively parallel version, coming out?

I seem to have forgotten C, too..
From: Johan Kullstam
Subject: Re: OO (was: Why is tcl broken?)
Date: 
Message-ID: <m2d7yxrtf7.fsf@sophia.axel.nom>
Fernando Mato Mira <········@iname.com> writes:

> I seem to have forgotten C, too..

you lucky bastard!

-- 
J o h a n  K u l l s t a m
[········@ne.mediaone.net]
Don't Fear the Penguin!
From: Fernando D. Mato Mira
Subject: <language> broken
Date: 
Message-ID: <37645B8D.1A267771@acm.org>
Fernando Mato Mira wrote:

> [For example, I can think the Eiffel syntax is `philosophically' broken, but
> as `infix' syntaxes go, it is not. And it's a good language (I programmed

Correction. There's one thing broken: the Subject Oriented syntax. But as
Eiffel was started in '85, I guess that is understandable.

Ada 95 got this one right. Unfortunately, the semantics qualify as `multiply
broken' regarding this issue:

1. They thought about multiple dispatch, and they broke it on purpose.
2. If you specialize on more than one parameter, it should be the same type
specifier for all of them, i.e. only applicable to predicates, copying,
arithmetic and things like that, and not useful most of the times one
wants several specializers.
3. Worst of all, instead of just adopting a left-to-right precedence rule,
one that looks very ugly was added, increasing the complexity of a document
that is already big enough.

In '95, at VRAI we refocused our application from AI-oriented to (soft)
real-time. I grabbed the 9X specs, and the initial excitement turned to
disappointment (the prototype was in CLOS). Moving would not only have meant
losing a lot in flexibility but, worst of all, we would move only from being
third-class to second-class citizens (SGI platform).
And I would rather have dedicated the extra effort to CL. So, after a little
successful experience, and given the simpler requirements, we went with C++.
Needless to say, the project became a death march. You don't go with C++
unless you can afford to pay 5-10 times more for a lot less functionality.

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Fernando Mato Mira
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37660BA4.EA85D219@iname.com>
Chris Ebenezer wrote:

> Fernando Mato Mira <········@iname.com> writes:
> : of folks end up using one or another GNU tool. As a side effect, some
> : newcomers to Tk that would have normally used tcl could end up going
> : the STk way (unfortunately, STk is not publicized enough). Things like
> : Kawa also increase the feel of security regarding code reusability.
> : And some of those people will end up wanting to write full-fledged
>
> Why use TK at all ?  Once you are using scheme anyway there are nicer
> widget toolkits to use (from a point of view of more "normal" looking
> toolkits that don't suffer from color allocation problems),  there are
> bindings for guile and gtk and guile and Qt.

I don't know. Some people might just `want to do Tk'.
From: Julian Einwag
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <m1909lpipq.fsf@brightstar.swin.de>
Chris Ebenezer <·······@nortelnetworks.com> writes:

> Why use TK at all ?  Once you are using scheme anyway there are nicer
> widget toolkits to use (from a point of view of more "normal" looking
> toolkits that don't suffer from color allocation problems),  there are
> bindings for guile and gtk and guile and Qt. 

Do you know where you can get the Qt bindings? I'm quite interested in
it.
From: Cameron Laird
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7k9ic6$8oh$1@Starbase.NeoSoft.COM>
In article <··············@brightstar.swin.de>,
Julian Einwag  <······@brightstar.swin.de> wrote:
			.
			.
			.
>Do you know where you can get the Qt bindings? I'm quite interested in 
>it.

<URL:http://starbase.neosoft.com/~claird/comp.lang.python/python_GUI.html#PythonQt>
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: David Thornley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <sFU93.1567$kS.233449@ptah.visi.com>
In article <···············@nortelnetworks.com>,
Chris Ebenezer  <·······@nortelnetworks.com> wrote:
>
>Fernando Mato Mira <········@iname.com> writes:
>: of folks end up using one or another GNU tool. As a side effect, some
>: newcomers to Tk that would have normally used tcl could end up going
>: the STk way (unfortunately, STk is not publicized enough). Things like
>: Kawa also increase the feel of security regarding code reusability.
>: And some of those people will end up wanting to write full-fledged
>
>Why use TK at all ?  Once you are using scheme anyway there are nicer
>widget toolkits to use (from a point of view of more "normal" looking
>toolkits that don't suffer from color allocation problems),  there are
>bindings for guile and gtk and guile and Qt. 
>
The attraction, to me, would be that Tk is a free, open-source
graphics environment that will run on Unix with X-Windows, the
Macintosh, and Microsoft Windows.  CLIM is commercial, Garnet last
I looked didn't run on Microsoft Windows, and it looks to me like
I should be able to get Macintosh Common Lisp communicating with
Tk through Tcl without *that* much work.

I know little of this Qt animal.  Is it free?  Is it open source?
Does it run on the Macintosh?  Is it easy to set up an interface with?

--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Cameron Laird
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7k9iir$8uq$1@Starbase.NeoSoft.COM>
In article <····················@ptah.visi.com>,
David Thornley <········@visi.com> wrote:
			.
			.
			.
>I know little of this Qt animal.  Is it free?  Is it open source?
>Does it run on the Macintosh?  Is it easy to set up an interface with?
			.
			.
			.
Qt comes in a handful of flavors.  Some are free of charge.
There's an open-sourced work-alike.  None of these run under
MacOS, to the best of my knowledge.  I'm collecting information
about them at
<URL:http://starbase.neosoft.com/~claird/comp.windows.misc/Qt.html>
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: Jason Stokes
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <eTX93.1201$lc5.5327@ozemail.com.au>
On Wed, 16 Jun 1999 21:27:52 GMT, David Thornley <········@visi.com> wrote:


>I know little of this Qt animal.  Is it free?  Is it open source?
>Does it run on the Macintosh?  Is it easy to set up an interface with?

See www.troll.no, but here's the synopsis:

Qt is a cross-platform GUI toolkit for Unix and Windows.  Qt releases a
special "free edition" for Unix only which does match the open source
definition, but with some licensing issues.

The problem is that linking with the Qt free edition libraries is only
allowed if you write open source software yourself.  If you want to write a
commercial application for Qt, you have to buy a license from Troll Tech. 

-- 
Jason Stokes: ·····@bluedog.apana.org.au
From: Juhani Rantanen
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <slrn7mh4sb.b7u.misty@pao.fiba.tampere.fi>
On Wed, 16 Jun 1999 21:27:52 GMT, David Thornley <········@visi.com> wrote:
>I know little of this Qt animal.  Is it free?  Is it open source?
>Does it run on the Macintosh?  Is it easy to set up an interface with?

Qt will soon be licensed under an open-source license, though it will
be very restrictive in practice (all changes must be distributed as
patches). The current version of Qt is not free software, and I am
surprised to find someone who doesn't know about the flamewars... Try
"qt gtk free" on Altavista or Dejanews to find out :)

I also suggest looking at Gtk+ <http://www.gtk.org/>, which is used in
GIMP (a free Photoshop clone for Unix/Linux, http://www.gimp.org/) and
Gnome (a free desktop environment for the GNU project,
http://www.gnome.org/). The Gnome folks have made a version of Guile with
Gtk and Gnome bindings, but this is not very widely tested AFAIK, and some
higher-level interface would be preferable (though now that we have the
C-level API mapped to Scheme, the higher-level interface could be written
in Scheme).

Gtk+ is available on Unix and Win32. Win32 support is not in the official
sources, but I even occasionally use a version of GIMP on Windows NT
and it is quite robust compared to some commercial programs on the
same operating system (though both Gtk+ and Gimp on that machine are
developer's versions from CVS). 

-- 
Juhani Rantanen
Lähderanta 20 G 55 Espoo  
 This message is best read on an 80x24 tty with 2 colors and a
 non-proportional font
From: David Thornley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <qjQa3.2280$kS.310807@ptah.visi.com>
In article <···············@nortelnetworks.com>,
Chris Ebenezer  <·······@nortelnetworks.com> wrote:
>
>········@visi.com (David Thornley) writes:
>: >Why use TK at all ?  Once you are using scheme anyway there are nicer
>: >widget toolkits to use (from a point of view of more "normal" looking
>: >toolkits that don't suffer from color allocation problems),  there are
>: >bindings for guile and gtk and guile and Qt. 
>: >
>: The attraction, to me, would be that Tk is a free, open-source
>: graphics environment that will run on Unix with X-Windows, the
>: Macintosh, and Microsoft Windows.  CLIM is commercial, Garnet last
>
I haven't gotten any responses that indicate that there's anything
besides Tk that meets the requirements I've specified:  free (in some
sense or another), open source (in some sense or another) and running
on MacOS, Microsoft Windows and X Windows.  Apparently Qt is free in
a very restricted sense, and doesn't run on MacOS.

>Hi David,
>
>If you want something that is completely free (in the GNU sense) go to
>http://www.gtk.org/, scroll down until you reach the bit on the sidebar
>that reads "Language Bindings".  You should be able to get guile-gtk
>from there.
> 
I'm willing to consider free in the GNU sense, but GTK claims to be
for X under Unix.  I didn't see anything about GTK for the Macintosh
(yes, I have heard RMS discuss Apple Corporation) or Microsoft Windows,
aside from a note from somebody porting it in his spare time.  Given
that I have limited spare time, I'm interested in a toolkit that I can
use on the Mac without doing my own port first.  (Besides, the impression
I get from the discussion on the MS Windows port is that GTK was
designed for X Windows and Unix, and is unnecessarily hard to port
compared to a toolkit designed with a wider focus to begin with.)

It still looks like Tk is the best available graphics package for my
purposes.

--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Fernando Mato Mira
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <376BE1B7.A2CCDB5E@iname.com>
David Thornley wrote:

> besides Tk that meets the requirements I've specified:  free, open source,
> on MacOS, Microsoft Windows and X Windows.

wxWindows.

DrScheme (MzScheme) uses that.
From: Greg Ewing
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37682DD3.43923243@compaq.com>
Fernando Mato Mira wrote:
> 
> I wanted to find out whether the disgust for tcl I've seen many people
> express had some basis in generally accepted principles.

I think I can give some reasonably objective reasons why
I find that I enjoy using Python in a way that I don't
enjoy using Tcl. (I have used both, and in one case have
written versions of the same program in both languages.)

1. Syntax

Tcl's syntax is based on the principle that an unadorned
sequence of alphanumeric characters represents a literal.
Anything else (such as referring to a variable) requires
extra syntax.

Ousterhout's justification is that this is optimal
for entering commands interactively, which is probably true.
However, this seems to me the wrong thing to optimise for,
given that tcl is meant to be a *programming* language.
The case being optimised for -- typing in a command which
is to be used once and thrown away -- simply doesn't occur.
Programs, even tiny ones, get used more than once!

In my opinion, the small amount of extra syntax needed
to quote literals and express function calls in a language
like Python is well worth the benefits you get further
down the track. An example of these is that, where in
Tcl you find yourself writing

   set a [expr $b + $c]

in Python you get to write

   a = b + c

Since the vast majority of words in my programs refer
to variables, and string literals are relatively rare,
I much prefer the Python way of writing things to
the Tcl way.

2. Data structures

Briefly, Python has them, Tcl doesn't. Tcl has character
strings, which are sometimes interpreted according to a
certain set of rules as lists of other character strings.
The rules are pedantic and can trip you up sometimes.

Tcl also has associative arrays, which is a powerful
feature compared to what some languages give you, but
they're not first-class objects and all you can put
in them are character strings.

Python has a rich set of first-class data types, including
lists, dictionaries (associative arrays) and classes/instances,
from which you can build real data structures and manipulate
them in easy and intuitive ways.

Tcl proponents will tell you that you can emulate all of
that using tcl lists and arrays, which is true, but misses
the point. Why bother with emulation when you can have the
real thing at no extra cost?

Summary

In my experience, I find that I can do with Python everything
that Tcl was designed for, do it more easily, do a lot
more besides, and have more fun in the process. I believe
the reason for this is rooted in some fundamental design
features of these languages, which I have sketched above.
I think that's about as close as one can get to providing
an objective argument as to whether one language is better
than another for a given purpose.

Greg
From: Donn Cave
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7ka2be$qja$1@nntp6.u.washington.edu>
Greg Ewing <··········@compaq.com> writes:
...
| In my experience, I find that I can do with Python everything
| that Tcl was designed for, do it more easily, do a lot
| more besides, and have more fun in the process. I believe
| the reason for this is rooted in some fundamental design
| features of these languages, which I have sketched above.
| I think that's about as close as one can get to providing
| an objective argument as to whether one language is better
| than another for a given purpose.

How about Expect as an example?  I don't mean to criticize the
existing Python Expect implementation(s); I have no idea where the
state of the art is on that.  I just remember trying to think of
a natural Python idiom that would replace the "expect" verb's case
switch flow of control.

My experience is more or less the same as yours - rewrote Tcl
software in Python and was henceforth a convert.  But I think
Tcl is syntactically more adaptable, where Python kind of makes
a virtue of its fixed ways.

	Donn Cave, University Computing Services, University of Washington
	····@u.washington.edu
From: Greg Ewing
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37698FF8.BE29BB9@compaq.com>
Donn Cave wrote:
> 
> But I think
> Tcl is syntactically more adaptable, where Python kind of makes
> a virtue of its fixed ways.

Yes, someone who likes languages with highly reconfigurable
syntax would perhaps like Tcl for that reason. Personally
I find that Python provides as much flexibility as I need
or want almost all the time. 

Python's keyword arguments are quite useful for designing 
little sub-languages for certain contexts. Tk's option
settings for widgets map naturally onto keyword arguments,
for example.

Greg
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37776FFA.2D85@mailserver.hursley.ibm.com>
Donn Cave wrote:
> 
> Greg Ewing <··········@compaq.com> writes:
> ...
> | In my experience, I find that I can do with Python everything
> | that Tcl was designed for, do it more easily, do a lot
> | more besides, and have more fun in the process. I believe
> | the reason for this is rooted in some fundamental design
> | features of these languages, which I have sketched above.
> | I think that's about as close as one can get to providing
> | an objective argument as to whether one language is better
> | than another for a given purpose.
> 

Actually you find that you can do everything that YOU want to
in Python and YOU find it easier than doing it in Tcl. I do not
believe that you can do everything in Python that you can do
in Tcl (at least as regards extending the language itself).

> How about Expect as an example?  I don't mean to criticize the
> existing Python Expect implementation(s), have no idea where the
> state of the art is on that.  I just remember trying to think of
> a natural Python idiom that would replace the "expect" verb's case
> switch flow of control.
> 
> My experience is more or less the same as yours - rewrote Tcl
> software in Python and was henceforth a convert.  But I think
> Tcl is syntactically more adaptable, where Python kind of makes
> a virtue of its fixed ways.
> 

Correct. With Tcl you can create new control structures which
are indistinguishable from the built-in ones; you cannot do
that in Python. Python is much more rigid in its syntax, although
it does have a lot of nice hooks to allow objects to behave
in different ways.


-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lw7looxx7m.fsf@copernico.parades.rm.cnr.it>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

	...

> Correct. With Tcl you can create new control structures which
> are indistinguishable from the built-in ones; you cannot do
> that in Python. Python is much more rigid in its syntax, although
> it does have a lot of nice hooks to allow objects to behave
> in different ways.

But you can do that much more easily and elegantly in the L-word
language :)
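
To make that concrete: a user-defined control construct in CL (say a
hypothetical REPEAT-UNTIL, sketched below with nothing but standard
primitives) is indistinguishable from a built-in form at the call site:

(defmacro repeat-until (test &body body)
  "Run BODY at least once, stopping when TEST becomes true."
  (let ((start (gensym "START")))
    `(block nil
       (tagbody
          ,start
          ,@body
          (unless ,test (go ,start))))))

;; (let ((i 0))
;;   (repeat-until (>= i 3)
;;     (print (incf i))))       ; prints 1, 2, 3

No special hooks are needed, and the expansion is compiled like any
other code.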

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Klaus Schilling
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <876748mcxg.fsf@home.ivm.de>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:

> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> 	...
> 
> > Correct. With Tcl you can create new control structures which
> > are indistinguishable from the built-in ones; you cannot do
> > that in Python. Python is much more rigid in its syntax, although
> > it does have a lot of nice hooks to allow objects to behave
> > in different ways.
> 
> But you can do that much more easily and elegantly in the L-word
> language :)

It can be done best in Scheme, by means of the almighty
call-with-current-continuation, the best of all control structures,
and define-syntax on top of it.

Klaus Schilling
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37788BB9.ABD@mailserver.hursley.ibm.com>
Klaus Schilling wrote:
> 
> Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:
> 
> > Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> >
> >       ...
> >
> > > Correct. With Tcl you can create new control structures which
> > > are indistinguishable from the built-in ones; you cannot do
> > > that in Python. Python is much more rigid in its syntax, although
> > > it does have a lot of nice hooks to allow objects to behave
> > > in different ways.
> >
> > But you can do that much more easily and elegantly in the L-word
> > language :)
> 
> It can be done best in Scheme, by means of the almighty
> call-with-current-continuation, the best of all control structures,
> and define-syntax on top of it.
> 

Both Tcl and Scheme can do it; "best" is subjective.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lw67471fyi.fsf@copernico.parades.rm.cnr.it>
Nope. *best* is the Common Lisp Macro system, with Scheme/Dylan syntax
stuff a second best.  The notion of 'upvar' in Tcl makes my head
spin. :) Finally, AFAIU, a Tcl "macro" must run as an interpreter of
the spec. A Common Lisp (Scheme) macro is compiled into regular code
by read-time expansion.
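
A quick illustration of what that code transformation looks like
(hypothetical macro; MACROEXPAND-1 is standard CL):

(defmacro double (x) `(* 2 ,x))

(macroexpand-1 '(double (+ 1 2)))   ; => (* 2 (+ 1 2)), T

The compiler only ever sees the expansion; there is nothing left of
DOUBLE to interpret when the expanded code runs.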

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Cameron Laird
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7laoq0$bvh$1@Starbase.NeoSoft.COM>
In article <··············@copernico.parades.rm.cnr.it>,
Marco Antoniotti  <·······@copernico.parades.rm.cnr.it> wrote:
			.
			.
			.
>spin. :) Finally, AFAIU, a Tcl "macro" must run as an interpreter of
>the spec. A Common Lisp (Scheme) macro is compiled into regular code
>by read-time expansion.
			.
			.
			.
Are you making a semantic point, or arguing
on the basis of implementation-specific per-
formance?  If the latter, please be aware
that the Tcl community is actively investi-
gating dramatic speedups of [eval].
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lwso7bdnia.fsf@copernico.parades.rm.cnr.it>
> From: Cameron Laird <······@Starbase.NeoSoft.COM>
> 
> In article <··············@copernico.parades.rm.cnr.it>,
> Marco Antoniotti  <·······@copernico.parades.rm.cnr.it> wrote:
> 			.
> 			.
> 			.
> >spin. :) Finally, AFAIU, a Tcl "macro" must run as an interpreter of
> >the spec. A Common Lisp (Scheme) macro is compiled into regular code
> >by read-time expansion.
> 			.
> 			.
> 			.
> Are you making a semantic point, or arguing
> on the basis of implementation-specific per-
> formance?

It is (mostly) a semantic point.

> If the latter, please be aware
> that the Tcl community is actively investi-
> gating dramatic speedups of [eval].

I have no problems with that.  But the following example is pretty
hard to beat. :)

==============================================================================
* (defmacro zut (a) `(list ,a))
ZUT

* (defun zot (x) (zut x))
ZOT

* (compile 'zot)
ZOT

* (disassemble 'zot)
071B5790:       .ENTRY ZOT(x)                ; (FUNCTION (T) LIST)
     7A8:       ADD   -18, %CODE
     7AC:       ADD   %CFP, 32, %CSP

     7B0:       CMP   %NARGS, 4              ; %NARGS = #:G0
     7B4:       BNE   L0
     7B8:       NOP
     7BC:       MOVE  %A0, %A4               ; %A0 = #:G1
     7C0:       MOVE  %A4, %A0               ; No-arg-parsing entry point
     7C4:       ADD   4, %ALLOC
     7C8:       ANDN  %ALLOC, 7, %A3
     7CC:       OR    3, %A3
     7D0:       MOVE  %A3, %A1
     7D4:       ST    %A0, [%A1-3]
     7D8:       ST    %NULL, [%A1+1]
     7DC:       TADDCCTV 4, %ALLOC
     7E0:       MOVE  %A3, %A0
     7E4:       MOVE  %CFP, %CSP
     7E8:       MOVE  %OCFP, %CFP
     7EC:       J     %LRA+5

     7F0:       MOVE  %LRA, %CODE
     7F4:       UNIMP 0
     7F8: L0:   UNIMP 10                     ; Error trap
     7FC:       BYTE  #x04
     7FD:       BYTE  #x19                   ; INVALID-ARGUMENT-COUNT-ERROR
     7FE:       BYTE  #xFE, #xED, #x01       ; NARGS
     801:       .ALIGN 4
* 
==============================================================================

Of course I could have defined a much more intricate macro.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Cameron Laird
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7laqeb$e17$1@Starbase.NeoSoft.COM>
In article <··············@copernico.parades.rm.cnr.it>,
Marco Antoniotti  <·······@copernico.parades.rm.cnr.it> wrote:
			.
			.
			.
>I have no problems with that.  But the following example is pretty
>hard to beat. :)
>
>==============================================================================
>* (defmacro zut (a) `(list ,a))
>ZUT
>
>* (defun zot (x) (zut x))
>ZOT
>
>* (compile 'zot)
>ZOT
>
>* (disassemble 'zot)
>071B5790:       .ENTRY ZOT(x)                ; (FUNCTION (T) LIST)
>     7A8:       ADD   -18, %CODE
>     7AC:       ADD   %CFP, 32, %CSP
>
>     7B0:       CMP   %NARGS, 4              ; %NARGS = #:G0
>     7B4:       BNE   L0
>     7B8:       NOP
>     7BC:       MOVE  %A0, %A4               ; %A0 = #:G1
>     7C0:       MOVE  %A4, %A0               ; No-arg-parsing entry point
>     7C4:       ADD   4, %ALLOC
>     7C8:       ANDN  %ALLOC, 7, %A3
>     7CC:       OR    3, %A3
>     7D0:       MOVE  %A3, %A1
>     7D4:       ST    %A0, [%A1-3]
>     7D8:       ST    %NULL, [%A1+1]
>     7DC:       TADDCCTV 4, %ALLOC
>     7E0:       MOVE  %A3, %A0
>     7E4:       MOVE  %CFP, %CSP
>     7E8:       MOVE  %OCFP, %CFP
>     7EC:       J     %LRA+5
>
>     7F0:       MOVE  %LRA, %CODE
>     7F4:       UNIMP 0
>     7F8: L0:   UNIMP 10                     ; Error trap
>     7FC:       BYTE  #x04
>     7FD:       BYTE  #x19                   ; INVALID-ARGUMENT-COUNT-ERROR
>     7FE:       BYTE  #xFE, #xED, #x01       ; NARGS
>     801:       .ALIGN 4
>* 
>==============================================================================
>
>Of course I could have defined a much more intricate macro.
			.
			.
			.
As it happens, there's very good work going on just now
to beef up Tcl's introspective capabilities.  Is *that*--
introspection--the real content of your preference?
Incidentally, many of the same ideas and possibilities
are available to Python, although I don't know of anyone
actively pursuing them for Python.  While Tim Peters im-
presses me with, among much else, his ability to code
clever little methods that tease all sorts of informa-
tion from a Python interpreter, I occasionally argue that
'twould be worth the effort to do introspection for Py-
thon in a more unified way.
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lwr9mvdlqk.fsf@copernico.parades.rm.cnr.it>
······@Starbase.NeoSoft.COM (Cameron Laird) writes:

> >Of course I could have defined a much more intricate macro.
> 			.
> 			.
> 			.
> As it happens, there's very good work going on just now
> to beef up Tcl's introspective capabilities.  Is *that*--
> introspection--the real content of your preference?

Nope, you must correct me if I am wrong, but in Tcl, you set up a new
'form' by writing a "proc".  This "proc" takes some Tcl 'list' (with
strings as leaves) and produces another list (with strings as leaves).

When "proc" is used within some code, its code is executed at
runtime.  This is not what happens in Lisp, Scheme and Dylan, where
macros are evaluated at read-time.  This is pure code transformation,
which when fed in the Common Lisp *compiler* produces inlined code.

Introspection is not something I was very interested in when making my
point. I just wanted to point out that with Common Lisp (and Scheme, and
Dylan) you get a native compiler.

If we want to talk about introspection, have you ever seen a Common
Lisp inspector?  It is based on a lot of "introspective" functions,
which were present in Common Lisp in 1984, 15 years ago.
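
A few of the standard operators such an inspector builds on (all ANSI CL;
the exact output is implementation-dependent):

(describe #'mapcar)                 ; human-readable description
(documentation 'mapcar 'function)   ; its docstring, if any is recorded
(class-of "hello")                  ; => a (sub)class of STRING
(inspect *package*)                 ; interactive inspector in most Lisps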

> Incidentally, many of the same ideas and possibilities
> are available to Python, although I don't know of anyone
> actively pursuing them for Python.  While Tim Peters im-
> presses me with, among much else, his ability to code
> clever little methods that tease all sorts of informa-
> tion from a Python interpreter, I occasionally argue that
> 'twould be worth the effort to do introspection for Py-
> thon in a more unified way.

As Common Lisp does? (sorry, I couldn't resist :) )

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Pierre R. Mai
Subject: The Power of Common Lisp Macros (was Re: Why is tcl broken?)
Date: 
Message-ID: <873dzah7p8.fsf_-_@orion.dent.isdn.cs.tu-berlin.de>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:

> When "proc" is used within some code, its code is executed at
> runtime.  This is not what happens in Lisp, Scheme and Dylan, where
> macros are evaluated at read-time.  This is pure code transformation,

Only reader-macros are evaluated at read-time.  Normal macros are
evaluated at compile time.  (I know Marco knows this; just to make
sure those without CL knowledge aren't confused.)

> which when fed in the Common Lisp *compiler* produces inlined code.

What I find more interesting than the fact that you get inlined code
is the possibility that compile-time evaluation affords you: in effect,
macros (and compiler-macros) let you extend and use the normal CL
compiler to do most of the dirty work for you in many situations.
IMHO CL's macros are _the_ most important reason that makes CL so good 
at embedding domain-specific languages.

Take for example Harlequin's Common SQL, which embeds SQL in CL
(without any ugly pre-processors, or other evil stuff, such as C
needs):

(do-query ((name) [select [ename] :from [emp]])
  (print name))

This maps name over the tuples returned by the query and prints them.
The important wrinkle:  The query expression gets optimized and
compiled as part of the normal CL compilation process.  Here do-query
is implemented as a macro, the special SQL-syntax using [] is
implemented as reader-macros on [ (and ]).

Or take this silly macro definition (yes, this would need to pass
around a tcl environment object and probably do a number of other
things, but for simplicity):

(defun compile-tcl-to-lisp (exp)
  "Compiles a TCL expression in Lisp syntax into optimized Common Lisp 
code for run-time execution."
  ...)

(defmacro in-tcl (&body expressions)
  (let ((compiled-code (mapcar #'compile-tcl-to-lisp expressions)))
    `(progn ,@compiled-code)))

Given this (and a suitably lispified TCL syntax, i.e. one which uses sexps
as basic units), I could now write embedded TCL in my CL programs, which
would get compiled to optimized native code along with my normal code.
And this works for any language for which you can dream up an appropriate
sexp syntax.  If done correctly, I can let the embedded language access
and modify the normal CL environment.

And all of this is done at the level of CL itself, that is all of this
can be written as portable ANSI Common Lisp code, without ever having
to open up the implementation, or resort to things like C...

Regs, Pierre.

PS:  Since I still think that this thread should never have been
started, especially as a cross-posted one, I've changed the subject
line and set a follow-up to comp.lang.lisp, where the possibilities of 
Common Lisp in this area can be suitably discussed.  Anyone who
disagrees with this judgement may of course ignore the F'up header...

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Greg Ewing
Subject: Re: The Power of Common Lisp Macros (was Re: Why is tcl broken?)
Date: 
Message-ID: <37801BEB.D8D98EE4@compaq.com>
With a macro facility like Scheme's one can
certainly do a lot of elegant things. But,
in my experience, the place where all these
extensible-syntax languages fall down badly
is in the area of error reporting.

Everything is just fine and dandy until you
make a mistake, whereupon the error message
you get is phrased in terms of the code
after macro expansion. Depending on how
radical a transformation the macro has
performed, this can range from slightly
quirky to completely unintelligible.

Does anything in CL address this issue?

Greg
From: Lars Marius Garshol
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <wkyah35499.fsf@ifi.uio.no>
* Cameron Laird
|
| As it happens, there's very good work going on just now to beef up
| Tcl's introspective capabilities.  Is *that*-- introspection--the
| real content of your preference?

I don't know exactly what Marco had in mind, but this is certainly not
the main value of the Common Lisp macro facility. The main value lies
in being able to extend the language seamlessly with no run-time costs.

Things like the while loop (which does not exist in CL), the Common
Lisp Object System and programming by contract features can be built
on top of the Lisp core using nothing but CL primitives. Personally, I
dream of finding the time to toy with Aspect-Oriented programming in
CL through extension macros.
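
The while loop case fits in a few lines (a sketch; WHILE is not part of
ANSI CL, which is exactly the point):

(defmacro while (test &body body)
  `(do ()
       ((not ,test))
     ,@body))

;; (let ((i 0))
;;   (while (< i 3)
;;     (print i)
;;     (incf i)))               ; prints 0, 1, 2

Since it expands into a plain DO loop, there is no run-time
interpretation cost.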

I know tcl has a similar capability, but I'm unsure whether it is of
comparable power and convenience. 

[From your first posting in this sub-thread]

| Are you making a semantic point, or arguing on the basis of
| implementation-specific performance?

Is there more than one implementation of tcl? If so, does any of them
do macro expansions at compile-time? If not, I think arguments based
on performance also carry considerable weight.

I doubt that you would disagree that C++ has better performance than
does tcl.  Given that CL achieves comparable performance to C++, a
conclusion seems near at hand. :) 

(BTW, I don't know if you're aware of this, but Dylan, which you wrote
an excellent article on[1], is essentially Common Lisp bereft of one of
its major features, the S-expression syntax.)

| Incidentally, many of the same ideas and possibilities are available
| to Python, although I don't know of anyone actively pursuing them
| for Python.

Actually, macros, performance and S-expressions are among the things I
miss the most in Python.  bytecodehacks, although certainly cool, are
just a pale shadow of what CL macros provide.

As for introspection, Common Lisp has good support for that as well,
although I very much doubt that that was Marco's point.

| [...] I occasionally argue that 'twould be worth the effort to do
| introspection for Python in a more unified way.

'twould indeed, but I find other issues more pressing.

--Lars M.

[1] <URL: http://www.sunworld.com/swol-03-1999/swol-03-regex.html >
From: Erik Naggum
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3139717961977847@naggum.no>
* ······@Starbase.NeoSoft.COM (Cameron Laird)
| If the latter, please be aware that the Tcl community is actively
| investigating dramatic speedups of [eval].

  I used to say "those who do not know Lisp are doomed to reimplement it"
  and mean it humorously.  maybe it's the truth and not funny at all.  sigh.

#:Erik
-- 
@1999-07-22T00:37:33Z -- pi billion seconds since the turn of the century
From: Mike McDonald
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7ldnu8$on4$1@spitting-spider.aracnet.com>
In article <················@naggum.no>,
	Erik Naggum <····@naggum.no> writes:
> * ······@Starbase.NeoSoft.COM (Cameron Laird)
>| If the latter, please be aware that the Tcl community is actively
>| investigating dramatic speedups of [eval].
> 
>   I used to say "those who do not know Lisp are doomed to reimplement it"
>   and mean it humorously.  maybe it's the truth and not funny at all.  sigh.
> 
> #:Erik

  Years ago at an X conference in San Jose, Ousterhout gave a talk on Tcl/Tk.
I came away from the talk thinking "Man, what a poorly implemented Lisp
interpreter that is!"

  Mike McDonald
  ·······@mikemac.com
From: David Thornley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <Or8f3.958$U5.194478@ptah.visi.com>
In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:
>* ······@Starbase.NeoSoft.COM (Cameron Laird)
>| If the latter, please be aware that the Tcl community is actively
>| investigating dramatic speedups of [eval].
>
>  I used to say "those who do not know Lisp are doomed to reimplement it"
>  and mean it humorously.  maybe it's the truth and not funny at all.  sigh.
>
I think it's true and funny.  

(For another point of view, consider Bjarne Stroustrup, who obviously
knows some Lisp and therefore didn't have to reimplement it in C++.)

(Am I the only person on the planet whose favorite programming languages
are Common Lisp and C++, in that order?)



--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Jean-Claude Wippler
Subject: Common Lisp (Was: Why is tcl broken?)
Date: 
Message-ID: <377D1D41.13F003C3@equi4.com>
David,

> Erik Naggum  <····@naggum.no> wrote:
[...]
> >  I used to say "those who do not know Lisp are doomed to reimplement
> >  it" and mean it humorously.  maybe it's the truth and not funny at
> >  all.  sigh.
> >
> I think it's true and funny.
> 
> (For another point of view, consider Bjarne Stroustrup, who obviously
> knows some Lisp and therefore didn't have to reimplement it in C++.)
> 
> (Am I the only person on the planet whose favorite programming
> languages are Common Lisp and C++, in that order?)

I don't know.  Here's a question for you: how would you summarize that
preference to someone who knows C++ well and Lisp only from long ago?  
What I'm asking is: what should one read / examine to quickly grasp the
essence of Common Lisp and understand what you meant by this?

-- Jean-Claude
From: David Thornley
Subject: Re: Common Lisp (Was: Why is tcl broken?)
Date: 
Message-ID: <Lzvf3.1094$U5.215048@ptah.visi.com>
In article <·················@equi4.com>,
Jean-Claude Wippler  <···@equi4.com> wrote:
>David,
>
>> Erik Naggum  <····@naggum.no> wrote:
>[...]
>> >  I used to say "those who do not know Lisp are doomed to reimplement
>> >  it" and mean it humorously.  maybe it's the truth and not funny at
>> >  all.  sigh.
>> >
>> I think it's true and funny.
>> 
>> (For another point of view, consider Bjarne Stroustrup, who obviously
>> knows some Lisp and therefore didn't have to reimplement it in C++.)
>> 
>> (Am I the only person on the planet whose favorite programming
>> languages are Common Lisp and C++, in that order?)
>
>I don't know.  Here's a question for you: how would you summarize that
>preference to somone who knows C++ well and Lisp only from long ago?  
>What I'm asking is: what should one read / examine to quickly grasp the
>essence of Common Lisp and understand what you meant by this?
>
OK, here goes a quick attempt at explaining what I like about C++
and why I like Common Lisp better.

I like the C++ libraries.  With the integration of the STL, C++ has,
built-in, lots of the little algorithms and data structures that
everybody writes for themselves in less civilized languages.  Common
Lisp, on the whole, does about as well at that.  The Loop macro is
very powerful (in CL), but not intuitive to a Lisper and I haven't
bothered learning it so far.

Both languages support the idea of tailoring the language to suit
the application.  In C++, you write classes representing the things
you're trying to work with (as well as a whole lot of other classes)
and write the main program largely in terms of them.  You do much
the same in CL, but it seems to me to be much easier in CL.  (Forth
is another example of an extendible language.)

Both languages are of the "trust-the-programmer" as opposed to "B&D"
nature.  I'm not claiming that one nature is better than the other,
but I know very well which one I like a whole lot better.

C++ is a multi-paradigm language by design, supporting generic,
object-oriented, and procedural programming.  It does not directly
support functional or declarative programming.  Common Lisp is
a multi-paradigm language by nature.  It was initially useful for
functional and procedural programming.  When it became desirable to
make it object-oriented, people wrote extensions in Common Lisp
to make it object-oriented.  The CL macro facility does everything
templates can do to allow generic programming, and more besides.
There is at least one package that will enable you to write CL
programs in the declarative style (Screamer - haven't used it
myself).
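
To make the template comparison concrete, here is a minimal sketch
(DEFINE-SUM and the generated names are made up) of a macro that stamps
out type-specialized code, much as a C++ template instantiation does:

    (defmacro define-sum (name element-type)
      "Define NAME as a summing function specialized to ELEMENT-TYPE."
      `(defun ,name (v)
         (declare (type (simple-array ,element-type (*)) v))
         (reduce #'+ v)))

    (define-sum sum-doubles double-float)   ; roughly sum<double>
    (define-sum sum-fixnums fixnum)         ; roughly sum<int>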

But what I really like about Common Lisp is that it lets me defer the
unimportant decisions.  To give an example, let's drop back to C.
Suppose I need a certain data object.  In C, I'd have to immediately
assign it a type, and I'd have to handle it like something of that
type throughout the program.  If it turned out that I had picked the
wrong type, I'd have to do a lot of changes all through the program.
In C++, I can minimize this by making this object a member of its
own class, but there is still a temptation to use a built-in type if
possible.  In CL, I don't have to specify what type it is for a long
time.  I may never have to.  If the type doesn't really matter, then
I have not put any time into selecting one.  If it turns out to
matter, then I can frequently make the decision knowing why it matters.

In general, there are a lot of details in C, and to a somewhat lesser
extent C++, that do not exist by default in CL.  In Common Lisp, I can
do my own memory management if I want to, but if I don't need that
the garbage collector will prevent a host of memory management errors.
In Common Lisp, I can declare the type of everything if I want to,
but I don't have to.  
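
For example (a minimal sketch, names made up), the same function with
and without declarations:

    (defun mean (xs)                   ; no type commitments at all
      (/ (reduce #'+ xs) (length xs)))

    (defun mean/fast (xs)              ; same code, now with declarations
      (declare (type (simple-array double-float (*)) xs)
               (optimize (speed 3)))
      (/ (reduce #'+ xs) (length xs)))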

As a matter of personal history, Common Lisp became my favorite
programming language when I was given a large assignment in it, and I
found out that I could change how certain parts of the program did things
very easily.  I've never found a language since that lets me swap out
data structures and algorithms with such ease, and which prevents so
many of the stupid mistakes I am used to making in C.  (Type
mismatch?  In CL, every object carries its type with it.  Memory
management?  CL does it for me, and if I want to do it myself CL will
still prevent some serious errors.)

My love for CL increased when I started using the standard object-
oriented features, the Common Lisp Object System.  I find that the
system of not having C++-style member functions, but rather writing
functions that just work on classes as I specify, is wonderful.
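
A minimal sketch of what I mean (the class names are made up):

    (defclass circle    () ((radius :initarg :radius)))
    (defclass rectangle () ((width  :initarg :width)
                            (height :initarg :height)))

    (defgeneric area (shape))          ; AREA belongs to no class

    (defmethod area ((s circle))
      (with-slots (radius) s
        (* pi radius radius)))

    (defmethod area ((s rectangle))
      (with-slots (width height) s
        (* width height)))

Adding a new kind of shape later means writing one more DEFMETHOD,
without touching any of the existing classes.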


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Jean-Claude Wippler
Subject: Re: Common Lisp (Was: Why is tcl broken?)
Date: 
Message-ID: <377F3DFB.926C4816@equi4.com>
Hello David,

> OK, here goes a quick attempt at explaning what I like about C++
> and why I like Common Lisp better.

Thanks for your long and thoughtful response on CL and C++.

When I asked the question, I was aware of the fact that this might be
the start of yet another language fight (yawn), but as I hoped it ended
up being a very useful summary of comparative strengths.  Thanks again.

> [...] But what I really like about Common Lisp is that it lets me
> defer the unimportant decisions.

This could be described as static versus dynamic typing.  Is that what
makes you prefer CL?  Where would you place other languages such as
Python/Tcl/Perl on this scale?

> In general, there are a lot of details in C, and to a somewhat lesser
> extent C++, that do not exist by default in CL.  In Common Lisp, I can
> do my own memory management if I want to, but if I don't need that
> the garbage collector will prevent a host of memory management errors.

What's your position on Java then, which is statically typed but has
automatic memory management?

Let me explain why I'm asking all this: I have been through enough
languages to end up with a near-total indifference to syntax.  The
things you are saying about CL and C++ are what matter.  Unfortunately,
writing code is impossible without *choosing* a syntax and a language.
Like everyone, I'm always pondering what tools to use for a project,
knowing how much effort / time it takes to become proficient with them.

We all know there is no single "best" tool/language for everything.  But
I'm wondering whether it would help if one were to choose two or three
languages and use them for different layers in larger projects.  The
disadvantage (and risk) is having to learn too much (and failing to use
any of the languages well), but maybe it would work out once several
developers are involved?

I think it is safe to say that C is going to stay with us for a long
time as systems-level language.  Things like stdio, regexp, and in C++
probably also STL, have become de-facto standards.  C is even more
dominant when you look at language *implementations* - it's 99% C.
Which is okay - I like having a strcmp() which is available anywhere.

At the same time, C/C++ is being used in places where more dynamic
languages have immense advantages to offer.  As one example I'm more
familiar with: why is Tk, the GUI side of Tcl which has been tied to a
lot of languages, written in C?  It's big, and evolution is, eh, slow.
Another example: should Netscape consist of millions of lines of C/C++?
If the 80/20 rule is worth anything (I happen to think it is), then how
can one justify writing 88% of the code in a language (C) which is very
effective for machine performance yet hampers top productivity?

So, let's suppose one were to choose/merge/combine/rewrite/whatever
several projects to use more than one language for implementation, what
languages should that be?

There are issues of performance, size, complexity, impedance mismatch,
quality, and probably more.  And now I'm trying to figure out at what
end of the spectrum languages such as Common Lisp, Scheme, (CA)ML, Dylan
are, when placed next to things like Forth, and "scripting" languages
such as Tcl/Perl/Python.

To try finishing off an already lengthy post, which is only barely on
topic for comp.lang.lisp, let me add that I'm currently considering
whether it would make sense to design a "scripting kernel" based on
several languages:
	- Scheme, for representing/executing abstract syntax trees
	- Forth, as low-level super-glue, possibly generated on the fly
	- C/C++ as bottom-line implementation language
	- Machine code, later on, emitted in a JIT manner

All of this, as an experiment to see whether it would make a suitable
platform for *implementing* Tcl, Python, Perl (maybe in that order).

The project is called Colossus.  The ultimate goal, is to create a sort
of platform where the existing language extensions can be tied in once,
where a lot more re-use might be feasible, but most importantly perhaps:
where it becomes possible to experiment with a range of implementation
techniques which might one day benefit all languages involved.

Colossus is an open source project.  I have not announced it, because I
want to better understand various language effects first.  Consider this
a pre-announcement perhaps.  It is obviously a long-term (crazy?) plan,
and it is being discussed by a handful of people, by email.  There are
obviously huge issues ahead (GC vs. ref counting being one).  But there
are also fascinating developments, such as Christian Tismer working on
Stackless Python and closures, without changing the implementation much.

A recent spin-off is "Minotaur", a monstrosity which lets you run Perl
scripts in Python, Python scripts in Tcl, and so on.  It works on Unix
and Windows (I'm debugging the Mac version now), with some very early trial
linkups to Java, MS COM, Lua, ICI, and PHP indicating that using Forth
as "super-glue" might be effective.

	The Minotaur homepage is at:
		http://mini.net/pub/ts2/minotaur.html

	Colossus doesn't have a homepage yet.

	A few things I wrote on these subjects can be found at:
		http://www.equi4.com/jcw/scripting4.html
		http://www.equi4.com/jcw/bolts.html

Now, to get back on topic, the big question is: where does CL fit in,
and where/how should it fit in?  Or is all of this so completely mad
that it doesn't matter, because it won't fly anyway...

-- Jean-Claude

P.S.  Please don't start a discussion of why language A is better or
      worse than B.  I'm not interested, and have not crossposted this.
From: David Thornley
Subject: Re: Common Lisp (Was: Why is tcl broken?)
Date: 
Message-ID: <2UNf3.1232$U5.237007@ptah.visi.com>
In article <·················@equi4.com>,
Jean-Claude Wippler  <···@equi4.com> wrote:
>Hello David,
>
>> OK, here goes a quick attempt at explaning what I like about C++
>> and why I like Common Lisp better.
>
>> [...] But what I really like about Common Lisp is that it lets me
>> defer the unimportant decisions.
>
>This could be described as static versus dynamic typing.  Is that what
>makes you prefer CL?  Where would you place other languages such as
>Python/Tcl/Perl on this scale?
>
Dynamic typing is part of it.  I am not familiar enough with Python
or Tcl to comment, but Perl does benefit from it.  I think part of
the flexibility is due to some of the functional programming
support, even though what I do is generally not functional programming.
It does help to be able to build functions on the fly and pass them
around.  The object system is very nice.  If I want to handle a different
sort of class with a generic function, I just write one new method.
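
For instance, the building-functions-on-the-fly part looks like this
(a minimal sketch; MAKE-SCALER is a made-up name):

    (defun make-scaler (k)
      (lambda (x) (* k x)))             ; built on the fly, closes over K

    (mapcar (make-scaler 3) '(1 2 3))   ; => (3 6 9)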

>> In general, there are a lot of details in C, and to a somewhat lesser
>> extent C++, that do not exist by default in CL.  In Common Lisp, I can
>> do my own memory management if I want to, but if I don't need that
>> the garbage collector will prevent a host of memory management errors.
>
>What's you position on Java then, which is statically typed but has
>automatic memory management?
>
I don't like Java.  Part of this is simple annoyance:  if the goal
was to make a better object-oriented C, I'm not impressed by the result.
(If the goal was something else, it missed even more.)  The automatic
memory management is good.  C++ tended to use constructors and destructors
for the equivalent of "unwind-protect" functionality as well as for memory
management, but Java has its own "unwind-protect".

On the other hand, Java wants programs to be written its way, with an
object system that looks a bit crippled by C++ standards (and very
crippled by Common Lisp standards).  It has some inconsistencies in
the type system (Integer vs. int) that seem to be there for a misplaced
idea of performance, which idea is not supported by other parts of the
language.  There is no way to do any sort of operator overloading,
which makes it difficult to add functionality to the language in a
seamless fashion.  

These are my impressions based on insufficient experience with the
language, so don't take them too seriously.  What actually has
happened is that I've done a few small things in Java and read a
couple of books and gotten annoyed with the language.  Since there
are enough interesting computer languages out there that I'm not
annoyed with yet, I am probably going to learn ML and Python before
I do much with Java again.

Except web applets.  Sigh.  Here's where I will get judgmental:  there
is no inherent advantage of using Java rather than Python or Tcl or
Perl as an applet language.  These languages have, I believe, "sandbox"
abilities to limit what an untrusted program can do.  These languages
are generally single-implementation and open-source:  you can, if you
like, examine the source code to see how these languages implement
sandboxes.  Even if you don't want to do it personally, lots of other
people do read the source.  Java has many implementations, most of which
are closed-source.  There were a lot of little security problems at
first, which shows me that sandbox security in Java is not that easy to
enforce.  Therefore, I'm supposed to trust my Java provider to make sure
that my sandbox stuff stays in the sandbox.  I don't know about you, but
I don't trust all the Java suppliers out there.  The only reason Java
became the applet language is media hype.

>We all know there is no single "best" tool/language for everything.  But
>I'm wondering whether it would help if one were to choose two or three
>languages and use them for different layers in larger projects.  The
>disadvantage (and risk) is having to learn too much (and failing to use
>any of the languages well), but maybe it would work out once several
>developers are involved?
>
This could work well.  The problems that I see are integration, knowing
both languages, and knowing where to switch between them.  In "Programming
Python" (which I've glanced at), there's a section about writing programs
in C and Python, using Python as the high-level language and C wherever
you need the performance.  You can do that also in Tcl.  I haven't seen
the capability advertised as much in Perl.  Given a good Common Lisp
implementation, it's less necessary, since CL does have high-performance
compilers.

>I think it is safe to say that C is going to stay with us for a long
>time as systems-level language.  Things like stdio, regexp, and in C++
>probably also STL, have become de-facto standards.  C is even more
>dominant when you look at language *implementations* - it's 99% C.
>Which is okay - I like having a strcmp() which is available anywhere.
>
C is a good system implementation language.  It is not a good choice for
lots of the things it has been used for.

>At the same time, C/C++ is being used in places where more dynamic
>languages have immense advantages to offer.  As one example I'm more
>familiar with: why is Tk, the GUI side of Tcl which has been tied to a
>lot of languages, written in C?  It's big, and evolution is, eh, slow.

It's also fairly portable.  Tk will run on MacOS, Microsoft Windows,
and X Windows.  I haven't used it much, but the Macintosh text editor
I tend to use is implemented in Tcl/Tk and works very well.  

>Another example: should Netscape consist of millions lines of C/C++?

C++ can be a very good high-level language (and often is misused so
it isn't).  

>If the 80/20 rule is worth anything (I happen to think it is), then how
>can one justify writing 88% of the code in a language (C) which is very
>effective for machine performance yet hampers top productivity?
>
This is a practice that is difficult to justify.  "When all you have
is a hammer, everything looks like a nail."  

>So, let's suppose one were to choose/merge/combine/rewrite/whatever
>several projects to use more than one language for implementation, what
>languages should that be?
>
Depends on the project.  I'd tend to see what I could do in Macintosh
Common Lisp first.  There are things it is not suited for:  when I want
to write a small utility, I have to use something else with a smaller
minimum application size (like C++).  I had to use a Fortran linear
programming library with MCL, and found it extremely frustrating at first.
Eventually, I actually found and interpreted the diagnostics, and it
went very smoothly after that.

>There are issues of performance, size, complexity, impedance mismatch,
>quality, and probably more.  And now I'm trying to figure out at what
>end of the spectrum languages such as Common Lisp, Scheme, (CA)ML, Dylan
>are, when placed next to things like Forth, and "scripting" languages
>such as Tcl/Perl/Python.
>
One problem is that there are different versions of most of these
languages, which differ in things like performance, size, and interfaces
with routines written in other languages.  If I am to run a function in
MCL, the main program has to be in MCL, and functionality from other
languages has to be in shared libraries.

Forth is much less varied, because the basic description of Forth is
very heavy on implementation.  

>[snip]
>
>Now, to get back on topic, the big question is: where does CL fit in,
>and where/how should it fit in?  Or is all of this so completely mad
>that it doesn't matter, because it won't fly anyway...
>
Common Lisp would seem to be near the top of any feeding chain it's
in, if not the top.  It needs to control its own memory, for example.
There is also a tradition that, if a higher-level language than Lisp
is needed, that it can be implemented in Common Lisp.  (Hey, it's
worked before.)  CL therefore tends not to allow room at the top, but
provides a very flexible top.

To be very subjective here, the proper way to interface Common Lisp with
other languages is to write services in other languages for Common Lisp
to use.  (On the other hand, my next project, which I'm really hoping
to spend a few hours at this fall, is putting Macintosh Common Lisp into
the Tk event loop, which contradicts my previous statement.)



--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Jean-Claude Wippler
Subject: Re: Common Lisp (Was: Why is tcl broken?)
Date: 
Message-ID: <37808DE6.32FD8C3B@equi4.com>
David,

> I don't like Java.  Part of this is [...]

Funny, me neither, though I've not been able to find good arguments.

[... interesting discussion about the drawback of a closed Java ...]

> Depends on the project.  I'd tend to see what I could do in Macintosh
> Common Lisp first.  There are things it is not suited for:  when I
> want to write a small utility, I have to use something else with a
> smaller minimum application size (like C++).  [...]

> If I am to run a function in MCL, the main program has to be in MCL,
> and functionality from other languages has to be in shared libraries.
[...]

> >Now, to get back on topic, the big question is: where does CL fit in,
> >and where/how should it fit in?  Or is all of this so completely mad
> >that it doesn't matter, because it won't fly anyway...
>
> Common Lisp would seem to be near the top of any feeding chain it's
> in, if not the top.  It needs to control its own memory, for example.
> There is also a tradition that, if a higher-level language than Lisp
> is needed, that it can be implemented in Common Lisp.  (Hey, it's
> worked before.)  CL therefore tends not to allow room at the top, but
> provides a very flexible top.
> 
> To be very subjective here, the proper way to interface Common Lisp
> with other languages is to write services in other languages for
> Common Lisp to use. 

All the things you say tell me that CL comes with a sizeable run-time
support system.  Just like Python/Tcl/Perl, btw.  Could you summarize
what that contains?  Other than the obvious GC implementation.  Is this
code specific to CL, or would you expect many things to be similar to
what one sees in Python/Tcl/Perl, i.e. stdio / regexp / containers? 
Also, would you say that a lot of the difference between Scheme and CL
is the run-time support part?  Scheme has a relatively small core AFAIK.

> (On the other hand, my next project, which I'm really hoping to spend
> a few hours at this fall, is putting Macintosh Common Lisp into the Tk
> event loop, which contradicts my previous statement.)

Interesting.  Indeed, that would mean CL can be embedded...

I'll stop asking questions now :)

-- Jean-Claude
From: Marco Antoniotti
Subject: Re: Common Lisp (Was: Why is tcl broken?)
Date: 
Message-ID: <lwiu7z2kxz.fsf@copernico.parades.rm.cnr.it>
Jean-Claude Wippler <···@equi4.com> writes:

> David,
> 
> > I don't like Java.  Part of this is [...]
> 
> Funny, me neither, though I've not been able to find good arguments.
> 
> [... interesting discussion about the drawback of a closed Java ...]
> 
> > Depends on the project.  I'd tend to see what I could do in Macintosh
> > Common Lisp first.  There are things it is not suited for:  when I
> > want to write a small utility, I have to use something else with a
> > smaller minimum application size (like C++).  [...]
> 
> > If I am to run a function in MCL, the main program has to be in MCL,
> > and functionality from other languages has to be in shared libraries.
> [...]
> 
> > >Now, to get back on topic, the big question is: where does CL fit in,
> > >and where/how should it fit in?  Or is all of this so completely mad
> > >that it doesn't matter, because it won't fly anyway...
> >
> > Common Lisp would seem to be near the top of any feeding chain it's
> > in, if not the top.  It needs to control its own memory, for example.
> > There is also a tradition that, if a higher-level language than Lisp
> > is needed, that it can be implemented in Common Lisp.  (Hey, it's
> > worked before.)  CL therefore tends not to allow room at the top, but
> > provides a very flexible top.
> > 
> > To be very subjective here, the proper way to interface Common Lisp
> > with other languages is to write services in other languages for
> > Common Lisp to use. 
> 
> All the things you say tell me that CL comes with a sizeable run-time
> support system.  Just like Python/Tcl/Perl, btw.  Could you summarize
> what that contains?  Other than the obvious GC implementation.  Is this
> code specific to CL, or would you expect many things to be similar to
> what one sees in Python/Tcl/Perl, i.e. stdio / regexp / containers? 
> Also, would you say that a lot of the difference between Scheme and CL
> is the run-time support part?  Scheme has a relatively small core
> AFAIK.

A real compiler?  (I.e. a native compiler)
Plus, of course *the* top of the line (alright it is a flame bait :))
Object System.

"quote - STDIO - unquote" is part of the language, REGEXP is available
as it is in C/C++, and as per "containers", you are talking about the
language that invented the very notion of them (another flame bait :)).

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: William Deakin
Subject: Re: Common Lisp (Was: Why is tcl broken?)
Date: 
Message-ID: <378459A6.5BB57F03@pindar.com>
David Thornley wrote:

> ... In "Programming Python" (which I've glanced at), there's a section about
> writing programs
> in C and Python, using Python as the high-level language and C wherever you
> need the performance.  You can do that also in Tcl.  I haven't seen the
> capability advertised as much in Perl....

With the release of perl 5.005x there is a perl compiler.  It can also
dump out the C containing calls to the perl libraries, enabling you to
roll your own functions and extensions.

There are also the tools XS (contained with the perl distribution) and SWIG
(see http://www.swig.org for details) that enable the development of
extensions in C/C++.

SWIG also supports python and Tcl/Tk (which I had been aware of but never used)
but the web page states that SWIG 'has also been extended to include languages
such as Java, Eiffel, and Guile.' !

Has anybody used this for extending CL?

:-) Will
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3779ED0F.2C67@mailserver.hursley.ibm.com>
Marco Antoniotti wrote:
> 
> Nope. *best* is the Common Lisp Macro system, with Scheme/Dylan syntax
> stuff a second best.  The notion of 'upvar' in Tcl makes my head

I am surprised at this because Tcl's [upvar] is simply an explicit form
of dynamic binding which I seem to remember is what Lisp uses. (That
is how the 'let' function can be (is) implemented as a lambda function).

> spin. :) Finally, AFAIU, a Tcl "macro" must run as an interpreter of
> the spec. A Common Lisp (Scheme) macro is compiled into regular code
> by read-time expansion.
> 

I find that Lisp macros (while a very powerful and necessary mechanism)
are sooo confusing. They are in essence another language inside Lisp,
and as such introduce inconsistencies. Tcl on the other hand doesn't
need a macro language and as such is much more consistent than Lisp.

If I needed a macro language in Tcl I can just write one.

Lisp and Tcl have a lot more in common than Lispers seem to want to 
acknowledge.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lwoghyexy5.fsf@copernico.parades.rm.cnr.it>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> Marco Antoniotti wrote:
> > 
> > Nope. *best* is the Common Lisp Macro system, with Scheme/Dylan syntax
> > stuff a second best.  The notion of 'upvar' in Tcl makes my head
> 
> I am surprised at this because Tcl's [upvar] is simply an explicit form
> of dynamic binding which I seem to remember is what Lisp uses. (That
> is how the 'let' function can be (is) implemented as a lambda
> function).

Ahem!  LET is essentially a macro built on top of LAMBDA application.
Dynamic binding is another beast altogether, which Common Lisp and
Scheme (statically scoped languages) allow in through a back door.
The notion of [upvar] and [uplevel] (with the optional numeric argument
which allows you to inspect N levels of stack) is truly hackish.  The utility of
this construct is not in question.  But its use as a
'macro-approximating' device is questionable.
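
For the record, a minimal sketch of the LET-on-top-of-LAMBDA point
(MY-LET is a made-up name):

    (defmacro my-let (bindings &body body)
      `((lambda ,(mapcar #'first bindings) ,@body)
        ,@(mapcar #'second bindings)))

    ;; (my-let ((a 2) (b 3)) (+ a b))
    ;; expands to ((lambda (a b) (+ a b)) 2 3), which evaluates to 5.

Note that the variables bound this way are lexical; no dynamic binding
is involved.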

BTW. Is TCL statically or dynamically scoped?

> > spin. :) Finally, AFAIU, a Tcl "macro" must run as an interpreter of
> > the spec. A Common Lisp (Scheme) macro is compiled into regular code
> > by read-time expansion.
> > 
> 
> I find that Lisp macros (while a very powerful and necessary mechanism)
> are sooo confusing. They are in essence another language inside Lisp,
> and as such introduce inconsistencies.

You are very mistaken.  Common Lisp macros are definitively not
another language.  They manipulate S-expressions, which are what Lisp
is made of.  They generate inconsistencies insofar as you are careless
when dealing with possible name capture problems. GENSYM and GENTEMP
are there to help you.
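
For example, the usual GENSYM idiom (a minimal sketch):

    (defmacro square (x)
      (let ((tmp (gensym)))
        `(let ((,tmp ,x))       ; evaluate X exactly once, into a fresh name
           (* ,tmp ,tmp))))

(square (incf i)) then increments I only once, and the generated symbol
cannot capture anything in the caller's code.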

> Tcl on the other hand doesn't
> need a macro language and as such is much more consistent than Lisp.

You severely underestimate the power of Common Lisp (Scheme and Dylan)
macro systems and ascribe to them inconsistencies which are not
there.  It is true that Tcl does not need a macro system. But mostly
because, since you have Common Lisp, you do not need Tcl altogether :)
Apart from some good ole flaming, why doesn't Tcl need a Lisp style
macro system? Just because you have [eval]?  Not a very good
argument. Especially when brought up in front of a Lisp audience.

> 
> If I needed a macro language in Tcl I can just write one.

I just received an email from Cameron Laird, citing a 'procm' form in
Tcl, which is supposed to do 'scan time' macros - probably something
in line with the real thing.  However, the manual pages for 8.1 at Scriptics
do not mention it.  Yet, I suppose that it is still an experimental
feature that maybe will appear in Tcl in a later edition - a few
lustres after Lisp had them :)

> Lisp and Tcl have a lot more in common than Lispers seem to want to 
> acknowledge.

Ahem!  Do you remember the "revised" Tcl paper by Ousterhout (sorry
for the spelling mistakes :) ), where the essential
addition/correction was the mention of the L-word? :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Joe English
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7ldt5p$srh$1@dragon.flightlab.com>
Marco Antoniotti wrote:

>BTW. Is TCL statically or dynamically scoped?

Neither, really, since Tcl doesn't have nested procedures, lambda
expressions, "let/letrec", local bindings or the like.

I'd describe Tcl as "namespace-scoped": there is an open-ended set
of top-level "namespace" environments, plus a distinguished "global"
environment.  In addition, each Tcl "proc" has its own local environment,
into which variables from other environments can be explicitly imported
via "variable", "global", explicit namespace reference, or "upvar".
"upvar" gives you something *sort* of like dynamic scope, but not
exactly.


--Joe English

  ········@flightlab.com
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lwhfnosvuz.fsf@copernico.parades.rm.cnr.it>
········@flightlab.com (Joe English) writes:

> Marco Antoniotti wrote:
> 
> >BTW. Is TCL statically or dynamically scoped?
> 
> Neither, really, since Tcl doesn't have nested procedures, lambda
> expressions, "let/letrec", local bindings or the like.
> 
> I'd describe Tcl as "namespace-scoped": there is an open-ended set
> of top-level "namespace" environments, plus a distinguished "global"
> environment.  In addition, each Tcl "proc" has its own local environment,
> into which variables from other environments can be explicitly imported
> via "variable", "global", explicit namespace reference, or "upvar".
> "upvar" gives you something *sort* of like dynamic scope, but not
> exactly.

Now, who was saying that Common Lisp macros are a source of
inconsistencies? :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <377B922A.4A7B@mailserver.hursley.ibm.com>
Marco Antoniotti wrote:
> 
> ········@flightlab.com (Joe English) writes:
> 
> > Marco Antoniotti wrote:
> >
> > >BTW. Is TCL statically or dynamically scoped?
> >
> > Neither, really, since Tcl doesn't have nested procedures, lambda
> > expressions, "let/letrec", local bindings or the like.
> >
> > I'd describe Tcl as "namespace-scoped": there is an open-ended set
> > of top-level "namespace" environments, plus a distinguished "global"
> > environment.  In addition, each Tcl "proc" has its own local environment,
> > into which variables from other environments can be explicitly imported
> > via "variable", "global", explicit namespace reference, or "upvar".
> > "upvar" gives you something *sort* of like dynamic scope, but not
> > exactly.
> 
> Now, who was saying that Common Lisp macros are a source of
> inconsistencies? :)
> 

I was referring to the fact that CL macros 'complicate' the parsing /
evaluation rules of Lisp which otherwise would be very simple and
consistent.

I personally like Lisp a lot and take advantage of lots of short cuts
that macros provide but I think that there is a big learning curve to
climb if you want to write one yourself (which is why I referred to it
as another language). Tcl on the other hand allows me to use the same
commands that I use to write normal programs to extend the language by
adding new programming constructs.

Like macros / loathe macros is a subjective thing and probably changes
with your experience of them.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Gareth McCaughan
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <86n1xgwbxr.fsf@g.pet.cam.ac.uk>
Paul Duffin wrote:

> I was referring to the fact that CL macros 'complicate' the parsing /
> evaluation rules of Lisp which otherwise would be very simple and
> consistent.
> 
> I personally like Lisp a lot and take advantage of lots of short cuts
> that macros provide but I think that there is a big learning curve to
> climb if you want to write one yourself (which is why I referred to it
> as another language). Tcl on the other hand allows me to use the same
> commands that I use to write normal programs to extend the language by
> adding new programming constructs.

So does Lisp. If you want (foobar x y) to be the same as (setq y x)
you say

    (defmacro foobar (x y)
      `(setq ,y ,x))

which is hardly terribly hairy. If you want (n-times n (wibble spong))
to be the same as n repetitions of (wibble spong) you say

    (defmacro n-times (n form)
      `(progn . ,(loop repeat n collect form)))

If you want (with-foo (var arg1 arg2) blah blah blah) to be the same
as (let ((var (foo arg1 arg2))) blah), you say

    (defmacro with-foo ((var . args) &body forms)
      `(let ((,var (foo . ,args))) . ,forms))

This is all the same Lisp language as you write functions in.
The backquote form is most commonly used when writing macros,
but it's useful for other things too. For instance, I hacked up
a symbolic differentiation program recently, and it's full of
things like `(+ (* ,(d f x) ,g) (* ,f ,(d g x))).  Writing
macros is certainly a slightly different *skill* from writing
ordinary code, but it's not remotely a different *language*.

Incidentally, how would you implement equivalents of those
things in Tcl?

Of course a macro isn't the *same* as a function. But then,
if you want something the same as a function, you can use a
function :-). Perhaps what you want is a function that doesn't
evaluate its arguments. Some old versions of Lisp had things
called FEXPRs, which were exactly that. But, after experience
of using them, it turned out that they weren't the Right Thing,
and something much more like the present macro system was born.
Kent Pitman (who posts regularly here) wrote an article about
why FEXPRs aren't good enough, back around the time when the
transition to macros happened. It's available on the web
somewhere.

If you don't like having to construct code explicitly (though
I find the backquote syntax makes this a non-issue), you might
prefer the Scheme macro system. It's rather interesting.

-- 
Gareth McCaughan            Dept. of Pure Mathematics & Math. Statistics,
················@pobox.com  Cambridge University, England.
From: Gareth Rees
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <sipv2cuwzc.fsf@cre.canon.co.uk>
Gareth McCaughan <················@pobox.com> wrote:
> Kent Pitman wrote an article about why FEXPRs aren't good enough, back
> around the time when the transition to macros happened. It's available
> on the web somewhere.

http://world.std.com/~pitman/Papers/Special-Forms.html

-- 
Gareth Rees
From: Joe English
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7lgd6t$uhn$1@dragon.flightlab.com>
Marco Antoniotti <·······@copernico.parades.rm.cnr.it> wrote:
>········@flightlab.com (Joe English) writes:
>>
>> I'd describe Tcl as "namespace-scoped": there is an open-ended set
>> of top-level "namespace" environments, plus a distinguished "global"
>> environment.  In addition, each Tcl "proc" has its own local environment,
>> into which variables from other environments can be explicitly imported
>> via "variable", "global", explicit namespace reference, or "upvar".
>> "upvar" gives you something *sort* of like dynamic scope, but not
>> exactly.
>
>Now, who was saying that Common Lisp macros are a source of
>inconsistencies? :)


Certainly not me!

But I don't see that Tcl's access rules are a common source
of inconsistencies either.  Since variables have to be explicitly
imported into the current environment, there is little chance of
accidental name-capture.  True, it is possible to write such maintenance
horrors as:

	proc frobnicate-foo {} {
	    upvar 1 foo foo
	    set foo "Frobbed!"
	}

but in practice no Tcl programmer in her right mind would ever
write such a thing.   Generally 'upvar' is only used on names
passed to a procedure in the argument list:

	proc frobnicate-variable {foo} {
	    upvar 1 $foo bar
	    set bar "Frobbed!"
	}
	# ...
	frobnicate-variable someVar

Note that the semantics of 'upvar' are such that accidental
name capture is avoided: [frobnicate-variable foo], and
[frobnicate-variable bar] do the Right Thing even though
'foo' and 'bar' are used as local variables in the procedure body.

Not to say that Tcl is without its pitfalls.  Tcl syntax
isn't *quite* as regular as people often make it out to be
("code is data" just like in Lisp, but unlike Lisp the syntax
of code is very different from the syntax of lists); and there's
that perennial favorite "quoting hell".  But on the whole
I find that "uplevel", "upvar", "catch", and the "code-is-data"
notion combine to make adding new control structures as easy
in Tcl as it is in Lisp and Scheme.

(Of course my personal favorite is Haskell -- higher-order
functions, lazy evaluation, and user-definable infix operators
make a neat combination too when it comes to defining new
control structures.)


--Joe English

  ········@flightlab.com
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <377B28D7.773C@mailserver.hursley.ibm.com>
Let's leave these good Python people alone and move to comp.lang.lisp.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <377B454C.6F59@mailserver.hursley.ibm.com>
Marco Antoniotti wrote:
> 
> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> > Marco Antoniotti wrote:
> > >
> > > Nope. *best* is the Common Lisp Macro system, with Scheme/Dylan syntax
> > > stuff a second best.  The notion of 'upvar' in Tcl makes my head
> >
> > I am surprised at this because Tcl's [upvar] is simply an explicit form
> > of dynamic binding which I seem to remember is what Lisp uses. (That

Mistake, I think I meant dynamic scoping.

> > is how the 'let' function can be (is) implemented as a lambda
> > function).
> 
> Ahem!  LET is essentially a macro built on top of LAMBDA application.

I know that.

> Dynamic binding is another beast altogether, which Common Lisp and
> Scheme (statically scoped languages) allow in through a back door.

If they are statically scoped, how does the following code work?

	(let ((a 2))
	  (let ((b 3))
	    (+ a b)))

When you expand the macro let you get something like.

	((lambda (a)
	   ((lambda (b) 
	      (+ a b)) 3)) 2)

Which is equivalent (apart from side effects) to.

	(defun inner (b)
	  (+ a b))

	(defun outer (a)
	  (inner 3))

	(outer 2)

So what is the process which associates the use of "a" inside inner
with the formal argument of outer?

The Tcl equivalent is

	proc inner {b} {
	    upvar 1 a a;
	    expr {$a + $b}
	}

	proc outer {a} {
	    inner 3
	}

	outer 2

> The notion of [upvar] and [uplevel] (with the optional numeric argument which allows
> you to inspect N levels of stack is truly hackish.  The utility of

As I have said before it is simply an explicit form of the Lisp mechanism which allows
the let macro to be implemented using lambda. It is very rare that N is more than 1.

> this construct is not in question.  But its use as a
> 'macro-approximating' device is questionable.
> 
> BTW. Is TCL statically or dynamically scoped?
> 

Tcl is statically scoped but you can use upvar to get access to variables
in enclosing stack frames.

> > > spin. :) Finally, AFAIU, a Tcl "macro" must run as an interpreter of
> > > the spec. A Common Lisp (Scheme) macro is compiled into regular code
> > > by read-time expansion.
> > >
> >
> > I find that Lisp macros (while a very powerful and necessary mechanism)
> > are sooo confusing. They are in essence another language inside Lisp,
> > and as such introduce inconsistencies.
> 
> You are very mistaken.  Common Lisp macros are definitively not
> another language.  They manipulate S-expressions, which are what Lisp

Lisp macros manipulate S-expressions but the macro definitions themselves
are interpreted differently to Lisp expressions.

A simple macro version of setq would convert from
	(setq symbol '(list))
to
	(set 'symbol '(list))

A macro setq is not interpreted the same way as set is, because if it
were, an error would occur when the Lisp interpreter tried to get the
value of the variable symbol before symbol was created.

This is where the inconsistencies come from.

> is made of.  They generate inconsistencies insofar as you are careless
> when dealing with possible name capture problems. GENSYM and GENTEMP
> are there to help you.
> 
> > Tcl on the other hand doesn't
> > need a macro language and as such is much more consistent than Lisp.
> 
> You severely underestimate the power of Common Lisp (Scheme and Dylan)
> macro systems and ascribe to them inconsistencies which are not

I don't underestimate the power of Lisp macros; I am just saying that
I find the way they warp the otherwise simple Lisp syntax / semantics
confusing.

Lisp does not really need macros, they are really just a way of
creating short cuts.

> there.  It is true that Tcl does not need a macro system. But mostly
> because, since you have Common Lisp, you do not need Tcl altogether :)

Not a particularly effective argument.

> Apart from some good ole flaming, why doesn't Tcl need a Lisp style
> macro system? Just because you have [eval]?  Not a very good
> argument. Especially when brought up in front of a Lisp audience.
> 

Tcl does not need a macro system because it has a simple, consistent
syntax / semantics, all commands are treated equally, and it is
possible to change the behaviour of existing commands.

> >
> > If I needed a macro language in Tcl I can just write one.
> 
> I just recieved an email from Cameron Laird, citing a 'procm' form in
> Tcl, which is supposed to do 'scan time' macros - probably something
> in line with the real thing. However, the manual pages for 8.1 at Scriptics
> does not mention it.  Yet, I suppose that it is still an experimental
> feature that maybe will appear in Tcl in a later edition - a few
> lustres after Lisp had them :)
> 

I would say that that is probably something Cameron has created.  Despite
what you think, Tcl is not crying out for a Lisp-like macro system.  What
it is crying out for is more data types, and it will soon be getting them.
Including a 'proper' [lambda] implementation.

I would say that Tcl (without macro system) can do anything that Lisp
(with macro system) can do.

> > Lisp and Tcl have a lot more in common than Lispers seem to want to
> > acknowledge.
> 
> Ahem!  Do you remember the "revised" Tcl paper by Ousterhout (sorry
> for the spelling mistakes :) ), where the essential
> addition/correction was the mention of the L-word? :)
> 

No I don't.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Fernando Mato Mira
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <377B593B.F7F7504B@iname.com>
OK. Let's dispel Lisp myth #4 and then
let's get this out of comp.lang.python
[economics are different than other groups,
 BEWARE of Netscape's scrollbar!]

Paul Duffin wrote:

> > > I am surprised at this because Tcl's [upvar] is simply an explicit form
> > > of dynamic binding which I seem to remember is what Lisp uses. (That
>
> Mistake, I think I meant dynamic scoping.

>         (defun inner (b)
>           (+ a b))
>
>         (defun outer (a)
>           (inner 3))
>
>         (outer 2)

>
> A simple macro version of setq would convert from
>         (setq symbol '(list))
> to
>         (set 'symbol '(list))

All 3 are the same instance of the first misconception.
That's OLD, BROKEN Lisp!!
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <377B8CFE.6113@mailserver.hursley.ibm.com>
Fernando Mato Mira wrote:
> 
> OK. Let's dispel Lisp mith #4 and then
> let's get this out of comp.lang.python
> [economics are different than other groups,
>  BEWARE of Netscape's scrollbar!]
> 
> Paul Duffin wrote:
> 
> > > > I am surprised at this because Tcl's [upvar] is simply an explicit form
> > > > of dynamic binding which I seem to remember is what Lisp uses. (That
> >
> > Mistake, I think I meant dynamic scoping.
> 
> >         (defun inner (b)
> >           (+ a b))
> >
> >         (defun outer (a)
> >           (inner 3))
> >
> >         (outer 2)
> 
> >
> > A simple macro version of setq would convert from
> >         (setq symbol '(list))
> > to
> >         (set 'symbol '(list))
> 
> All 3 the same instance of the first misconception.
> That's OLD, BROKEN Lisp!!

Please enlighten me as to what my misconception is. I am obviously not
a Lisp expert and I realise that there are plenty of different flavours
around, but the above code works inside Emacs, which says that it is
based on Common Lisp.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Gareth McCaughan
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <86pv2cwco4.fsf@g.pet.cam.ac.uk>
Paul Duffin wrote:

>>> Mistake, I think I meant dynamic scoping.
>> 
>>> (defun inner (b)
>>> (+ a b))
>>> 
>>> (defun outer (a)
>>> (inner 3))
>>> 
>>> (outer 2)
>> 
>>> 
>>> A simple macro version of setq would convert from
>>> (setq symbol '(list))
>>> to
>>> (set 'symbol '(list))
>> 
>> All 3 the same instance of the first misconception.
>> That's OLD, BROKEN Lisp!!
> 
> Please enlighten me as to what my misconception is. I am obviously not
> a Lisp expert and I realise that there are plenty of different flavours
> around but the above code works inside Emacs which says that it is
> based on Common Lisp.

Emacs is not based on Common Lisp. Common Lisp is not dynamically
scoped (though you can request specific variables to be scoped
dynamically).

The other main modern Lisp dialect, namely Scheme, is also not
dynamically scoped.

Old versions of Lisp were dynamically scoped because the first
Lisps were interpreted only and it's easier to do dynamic scoping
than lexical scoping in an interpreter. But with dynamic scoping
it's hard to get really good compiled code, and modularity is
harder too (because any function you call can mess with your
variables). So recent Lisps have switched to lexical scoping.
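
To make the difference concrete, a minimal sketch (*DYN*, PEEK and
MAKE-ADDER are made-up names):

    (defvar *dyn* 1)              ; DEFVAR proclaims *DYN* special (dynamic)

    (defun peek () *dyn*)

    (let ((*dyn* 2))              ; rebinding a special variable ...
      (peek))                     ; => 2: the callee sees the new binding

    (defun make-adder (n)         ; N is an ordinary lexical variable
      (lambda (x) (+ x n)))       ; the closure remembers this particular N

    (funcall (make-adder 10) 5)   ; => 15, even after MAKE-ADDER has returned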

Emacs Lisp, however, is not a recent Lisp.

-- 
Gareth McCaughan            Dept. of Pure Mathematics & Math. Statistics,
················@pobox.com  Cambridge University, England.
From: Eugene Leitl
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <14204.686.875737.910962@lrz.de>
Gareth McCaughan writes:

 > Emacs Lisp, however, is not a recent Lisp.

There are plans to convert XEmacs to use a Scheme engine instead of elisp.
From: forcer
Subject: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <hhso77ff3f.fsf@forcix.roof.lan>
Eugene Leitl <············@lrz.uni-muenchen.de> writes:

> Gareth McCaughan writes:
> 
>  > Emacs Lisp, however, is not a recent Lisp.
> 
> There are plans to convert XEmacs to use a Scheme engine instead of elisp.

Actually, RMS plans to change GNU Emacs to Guile.
If everything works as planned, Guile will be able to read elisp
sources as well, so old code will still work.
	-forcer

-- 
((email . ·······@mindless.com")       (www . "http://forcix.cx/")
 (irc   . ·······@#StarWars (IRCnet)") (gpg . "/other/forcer.gpg"))
From: Friedrich Dominicus
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <377EF792.1D3EA1F2@inka.de>
forcer wrote:
> 
> Eugene Leitl <············@lrz.uni-muenchen.de> writes:
> 
> > Gareth McCaughan writes:
> >
> >  > Emacs Lisp, however, is not a recent Lisp.
> >
> > There are plans to convert XEmacs to use a Scheme engine instead of elisp.
> 
> Actually, RMS plans to change GNU Emacs to Guile.
> If everything works as planned, Guile will be able to read elisp
> sources as well, so old code will still work.

From where do you get that information?

Regards
Friedrich
From: Craig Brozefsky
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <87d7y8vrc9.fsf@duomo.pukka.org>
Friedrich Dominicus <···················@inka.de> writes:

> > Actually, RMS plans to change GNU Emacs to Guile.
> > If everything works as planned, Guile will be able to read elisp
> > sources as well, so old code will still work.
> 
> From where do you get that information?

GUILE, as its name indicates (GNU's Ubiquitous Intelligent
Language for Extension), is the proposed extension language for all GNU
programs calling for extensibility.  Emacs fits in that category.

The unofficial page at http://www.red-brean.com/guile explains this a
bit.  Guile is not at a point where it can read/write elisp, but that
is the goal.  Presently it's a fairly useful scheme implementation
about one or two release cycles away from competing with some of the
sweeter scheme environments.

-- 
Craig Brozefsky                         <·····@red-bean.com>
Free Scheme/Lisp Software     http://www.red-bean.com/~craig
I say woe unto those who are wise in their own eyes, and yet
imprudent in 'dem outside                            -Sizzla
From: David Fox
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <lu1zeo6hg2.fsf@pipeline.ucsd.edu>
Friedrich Dominicus <···················@inka.de> writes:

> forcer wrote:
> > 
> > Eugene Leitl <············@lrz.uni-muenchen.de> writes:
> > 
> > > Gareth McCaughan writes:
> > >
> > >  > Emacs Lisp, however, is not a recent Lisp.
> > >
> > > There are plans to convert XEmacs to use a Scheme engine instead of elisp.
> > 
> > Actually, RMS plans to change GNU Emacs to Guile.
> > If everything works as planned, Guile will be able to read elisp
> > sources as well, so old code will still work.
> 
> From where do you get that information?

If you go to www.google.com and search for "emacs lisp guile" the
second document in the resulting list discusses it.
-- 
David Fox           http://hci.ucsd.edu/dsf             xoF divaD
UCSD HCI Lab                                         baL ICH DSCU
From: Reini Urban
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <377f7063.191825480@judy.x-ray.local>
forcer <······@mindless.com> wrote:
>Actually, RMS plans to change GNU Emacs to Guile.
>If everything works as planned, Guile will be able to read elisp
>sources as well, so old code will still work.

not really.
elisp has the same problem as AutoLISP with its dynamic scoping and the
huge base of old legacy code relying on that.

the current efforts only try to overcome the #f/nil problem in boolean
clauses, but i saw no effort to overcome the funarg and other
Lisp1/Lisp2 problems, e.g. how to find out which variables could be
converted to lexically scoped ones and which must stay dynamic or
"special".
those lisps interact with the user a lot. forcing closures or even
not-mutable primitives might be a serious limitation for existing code.

either all the elisp or autolisp code must be converted to the new
language or it's done internally (AutoLISP currently does this), but
this leads to some other serious limitations.

not to speak about the inherent scheme problem, one namespace only for
variable and function names and macros.
--
Reini Urban
http://xarch.tu-graz.ac.at/autocad/news/faq/autolisp.html
From: Michael Sperber [Mr. Preprocessor]
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <y9lg1348lms.fsf@brabantio.informatik.uni-tuebingen.de>
>>>>> "RU" == Reini Urban <······@xarch.tu-graz.ac.at> writes:

RU> the current efforts only try to overcome the #f/nil problem in boolean
RU> clauses, but i saw no effort to overcome the funarg and other
RU> Lisp1/Lisp2 problems, e.g. how to find out which variables could be
RU> converted to lexically scoped ones and which must stay dynamic or
RU> "special".
RU> those lisps interact with the user a lot. forcing closures or even
RU> not-mutable primitives might be serious limitation for existing code.

RU> either all the elisp or autolisp code must be converted to the new
RU> language or it's done internally (AutoLISP currently does this), but
RU> this leads to some other serious limitations.

RU> not to speak about the inherent scheme problem, one namespace only for
RU> variable and function names and macros.

I'm leading a student project in the context of the XEmacs project to
tackle all of these problems.  We have a prototype Elisp->Scheme
converter which uses constraint-based flow analysis to handle these
things.  So far, it's only a feasibility study, but things look pretty
good in many respects.  The prototype converts Elisp into reasonably
idiomatic Scheme.

It *is* a hard problem though, and the Guile folks seem nowhere close
to solving it.

-- 
Cheers =8-} Mike
Peace, understanding among peoples, and all that blah blah
From: Hrvoje Niksic
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <87lncwjezq.fsf@pc-hrvoje.srce.hr>
·······@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor]) writes:

> I'm leading a student project in the context of the XEmacs project
> to tackle all of these problems.

I've always wondered how well such a thing would work out in
practice.  Obviously, Elisp would have to work perfectly in order for
the old code to work.  However, what would in that case be the
preferred language?  Would pressing C-j in the scratch buffer evaluate 
Scheme (CL) or Elisp code?
From: Michael Sperber [Mr. Preprocessor]
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <y9l4sjj8tij.fsf@brabantio.informatik.uni-tuebingen.de>
>>>>> "Hrvoje" == Hrvoje Niksic <·······@srce.hr> writes:

Hrvoje> ·······@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor]) writes:

>> I'm leading a student project in the context of the XEmacs project
>> to tackle all of these problems.

Hrvoje> I've always wondered how well such a thing would work out in
Hrvoje> practice.  Obviously, Elisp would have to work perfectly in order for
Hrvoje> the old code to work.  

There are two answers for this, depending on what you want:

a) You can make Elisp merely *run* within the new substrate.  This is
   comparatively easy to do, and the MIT Scheme folks did this long
   ago.  They actually had an Elisp Gnus running within Edwin, their
   editor.  I could dig up a reference to Matt Birkholz's thesis if
   anyone's interested.

b) You want to migrate your Elisp program to the new substrate, in
   which case you want a translation which produces code as idiomatic
   as possible.  This is what we're doing.  Much harder, but doable.

Hrvoje> However, what would in that case be the preferred language?
Hrvoje> Would pressing C-j in the scratch buffer evaluate Scheme (CL)
Hrvoje> or Elisp code?

Depends on whether it's a Scheme (CL) *scratch* buffer or an Elisp one 
... :-)  You'd just have two modes.

-- 
Cheers =8-} Chipsy
Friede, Völkerverständigung und überhaupt blabla
From: Reini Urban
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <3780dcc0.19038025@judy.x-ray.local>
·······@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])
wrote:
>>>>>> "RU" == Reini Urban <······@xarch.tu-graz.ac.at> writes:
>RU> the current efforts only try to overcome the #f/nil problem in boolean
>RU> clauses, but i saw no effort to overcome the funarg and other
>RU> Lisp1/Lisp2 problems, e.g. how to find out which variables could be
>RU> converted to lexically scoped ones and which must stay dynamic or
>RU> "special".
>RU> those lisps interact with the user a lot. forcing closures or even
>RU> non-mutable primitives might be a serious limitation for existing code.
>
>RU> either all the elisp or autolisp code must be converted to the new
>RU> language or it's done internally (AutoLISP currently does this), but
>RU> this leads to some other serious limitations.
>
>RU> not to speak about the inherent scheme problem, one namespace only for
>RU> variable and function names and macros.
>
>I'm leading a student project in the context of the XEmacs project to
>tackle all of these problems.  We have a prototype Elisp->Scheme
>converter which uses constraint-based flow analysis to handle these
>things.  So far, it's only a feasibility study, but things look pretty
>good in many respects.  The prototype converts Elisp into reasonably
>idiomatic Scheme.
>
>It *is* a hard problem though, and the Guile folks seem nowhere close
>to solving it.

hmm, wouldn't it be easier to define a new procedure to optionally
declare some or all inner variables static?
something like LEXLET / DEFCLOSURE / FUNCTION LAMBDA 
instead of     LET    / DEFUN      / QUOTE LAMBDA?
sorry if such things already exist in elisp; i don't know it well enough.

in old elisp LEXLET would be the same as a dynamic LET. it could search
for possible conflicts on compilation and issue a compatibility warning.

so it's just a performance hack if it can be done. (most code will be
improved by this)

or you could switch between the two constructs at run-time, one using
the new closure feature with static vars, the old doing it a bit more
complicated, as it was done before. (callbacks, iterators, the eieio
object system)
the new name wouldn't destroy existing code and the user is forced to
think about it and write workarounds.

i don't think that it will be possible to solve it completely just by
simulating the code flow. => "halting problem"
so it should be the user who takes up the new features.

it is quite easy to mimic dynamic scoping rules in a lexical lisp. as I
said before, autolisp/visual lisp went this way. it's a mess to deal
with a global symbol space but with the proposed efficiency hacks and
e.g. by forbidding primitive redefinitions it can be made much faster.
a second original evaluator could be provided to run the underlying
lexical scheme. (in Autolisp we might get a Lex and an Autolisp console)
(RMS wouldn't like that)

#f/nil is really easy compared to that and already solved e.g. by the
norvig scheme interpreter. (but the other way round)
--
Reini Urban
http://xarch.tu-graz.ac.at/autocad/news/faq/autolisp.html
From: Ken Raeburn
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <tx1emij33yq.fsf@raeburn.org>
> Eugene Leitl <············@lrz.uni-muenchen.de> writes:
> > There are plans to convert XEmacs to use a Scheme engine instead of elisp.

Interesting.  I'd like to know more.

forcer <······@mindless.com> writes:
> Actually, RMS plans to change GNU Emacs to Guile.

I've started working on such a project.  It's in the early stages yet.

> If everything works as planned, Guile will be able to read elisp
> sources as well, so old code will still work.

That's the goal... a version of Emacs that can't use old .el/.elc
files is not practical.

Ken
From: Ray Blaak
Subject: Re: Emacs Lisp
Date: 
Message-ID: <ur9mjunpf.fsf_-_@infomatch.com>
Ken Raeburn <·······@raeburn.org> writes:
> forcer <······@mindless.com> writes:
> > Actually, RMS plans to change GNU Emacs to Guile.
> > If everything works as planned, Guile will be able to read elisp
> > sources as well, so old code will still work.

I am wondering why the effort with mapping #f <--> ()/nil is even necessary,
at least on the Guile side. Doesn't r4rs permit () to be considered as #f?
Having nil as a predefined symbol in Guile doesn't seem like a big evil, and
it certainly makes things easier if it is defined on both sides.

The real problem is the dynamic scoping of Elisp. Is it even possible in
theory to automatically translate Elisp to Guile? A common practice in Elisp
is this kind of thing:

(defun my-improved-version-of-foo ()
   (let ((foo-special-flag nil)) ;; Override this private variable of foo.
      (some-sort-of-setup)
      (foo)
      (some-sort-of-cleanup)))

Or what about:

(defvar symbol 'value)

(defun analyze-this (func)
  (let ((symbol 'another-value))
    (funcall func symbol)))

;; Should all return 'another-value
(analyze-this (lambda (s) symbol)) ;; access dynamically overridden version
(analyze-this (lambda (s) s)) ;; accesses local version
(analyze-this ;; access dynamically overridden version
  (lambda (s) 
	(let ((f (eval '(lambda () symbol))))
	  (funcall f))))

So how would the code of analyze-this be transformed into Guile? What about a
compiled version of analyze-this? Should symbol be used lexically or
dynamically? It depends on the function argument, and one cannot analyze all
uses of it, since future uses can pass in different values. Even if one
restricted the analysis to existing uses, the third example shows that
references to symbol can be constructed dynamically.

Halting problem indeed.

Wouldn't it be simpler to simply have both an Elisp engine and a Guile engine
in Emacs and leave it at that?

Or will Guile have optional dynamic scoping? If so, then one can just wrap
invocations of Elisp code with dynamic scoping enabled. Of course, this is also
effectively having both engines in place.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Ken Raeburn
Subject: Re: Emacs Lisp
Date: 
Message-ID: <tx1908q3lrs.fsf@raeburn.org>
Ray Blaak <·····@infomatch.com> writes:
> I am wondering why the effort with mapping #f <--> ()/nil is even necessary,
> at least on the Guile side. Doesn't r4rs permit () to be considered as #f?
> Having nil as a predefined symbol in Guile doesn't seem like a big evil, and
> it certainly makes things easier if it is defined on both sides.

I think r4rs permits it, but Guile doesn't do it, and r5rs (which
Guile should eventually conform to) doesn't permit it.  Correct me if
I'm wrong; I'm no Scheme guru.  Which is one reason, actually, that
I've just asked on the Guile list for help with this project.  If
you're interested in coding, let me know.

> The real problem is the dynamic scoping of Elisp. Is it even possible in
> theory to automatically translate Elisp to Guile? A common practice in Elisp
> is this kind of thing:

Yes.  I see two simple (non-clever) approaches:

1. Dynamic bindings are implemented by writing to globals, and
   restoring old values when leaving scope.
2. A Lisp environment stack is maintained, and used to perform
   lookups when in Lisp context; Scheme is unaffected unless the
   Scheme code explicitly looks up the current Lisp binding.
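
(To spell out approach 1: it is the classic shallow-binding trick.  A
toy Common Lisp sketch -- the macro name is invented, and it ignores
the unbound case and threads:)

(defmacro with-dynamic-binding ((sym val) &body body)
  (let ((s (gensym)) (old (gensym)))
    `(let* ((,s ,sym)
            (,old (symbol-value ,s)))       ; save the current global value
       (setf (symbol-value ,s) ,val)        ; "bind" by assignment
       (unwind-protect (progn ,@body)
         (setf (symbol-value ,s) ,old)))))  ; restore on the way out

(defvar *flag* nil)
(defun flag-set-p () (symbol-value '*flag*))
(with-dynamic-binding ('*flag* t)
  (flag-set-p))                             ; => T inside the binding
(flag-set-p)                                ; => NIL again afterwards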

We're probably going to need to be at least a little bit clever,
especially when we get to thread support.

As I indicated, this project is at an early phase.  Some initial work
has been done on translation, but I'm still using the Lisp evaluator
at the moment.

> So how would the code of analyze-this be transformed into Guile?

Carefully. :-)

In fact, I think I'll hang on to your example to test with.

>	 What about a
> compiled version of analyze-this? Should symbol be used lexically or
> dynamically?

Shouldn't be any different from the non-compiled version.

>	 It depends on the function argument, and one cannot analyze all
> uses of it, since future uses can pass in different values. Even if one
> restricted the analysis to existing uses, the third example shows that
> references to symbol can be constructed dynamically.

Yes, the translation needs to be complete, and uses of "eval" make
analysis more difficult.  So does "intern".  In the Lisp context,
"eval" will need to be a procedure that does the translation.


> Wouldn't it be simpler to simply have both an Elisp engine and a Guile engine
> in Emacs and leave it at that?

I'm close to that point now, actually.  The data sharing still needs
work, since (for example) Elisp strings have text properties and
multibyte-string support that Guile strings don't have.

But the goals of the Guile project include having it be the primary
extension language for FSF projects, and to be able to translate
other extension languages, to support input files in multiple
languages with only one extension package.  Translating Elisp and
eliminating the Lisp engine is "merely" an instance of the latter.

Ken
From: Rob Warnock
Subject: Re: Emacs Lisp
Date: 
Message-ID: <7m37ha$7es8l@fido.engr.sgi.com>
Ken Raeburn  <·······@raeburn.org> wrote:
+---------------
| Ray Blaak <·····@infomatch.com> writes:
| > I am wondering why the effort with mapping #f <--> ()/nil is even necessary
| > at least on the Guile side. Doesn't r4rs permit () to be considered as #f?
| 
| I think r4rs permits it, but Guile doesn't do it, and r5rs (which
| Guile should eventually conform to) doesn't permit it.
+---------------

Also important is that the IEEE Standard for Scheme (IEEE 1178-1990)
explicitly requires all three of #f, (), and nil to be distinct.


-Rob

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Christopher Browne
Subject: Re: Emacs Lisp
Date: 
Message-ID: <qQbh3.12929$AU3.335317@news2.giganews.com>
On 08 Jul 1999 16:54:31 -0400, Ken Raeburn <·······@raeburn.org> wrote:
>Ray Blaak <·····@infomatch.com> writes:
>> Wouldn't it be simpler to simply have both an Elisp engine and a 
>> Guile engine in Emacs and leave it at that?
>
>I'm close to that point now, actually.  The data sharing still needs
>work, since (for example) Elisp strings have text properties and
>multibyte-string support that Guile strings don't have.
>
>But the goals of the Guile project include having it be the primary
>extension language for FSF projects, and to be able to translate
>other extension languages, to support input files in multiple
>languages with only one extension package.  Translating Elisp and
>eliminating the Lisp engine is "merely" an instance of the latter.

An advantage to having multiple engines is that you reach that critical CS
number of "more than one," which fairly readily generalizes to "So,
how many did you want?"

"The Thing That Is Annoying About Emacs" is that it is effectively
single-threaded, so that if, for instance, GNUS goes off and starts
doing stuff, everything else is blocked from happening.

An approach that I'd regard as "neat" (and which isn't necessarily
the Right Way to implement) would be for Emacs to take on
a "microkernel" architecture, where there would be a central process
that controls the session, and then spawns "servers" as needed to
support buffers/windows/internal tasks.

It would very much make sense to "spawn" a server specifically to
support the back end handling of GNUS, for instance.

I suspect that this would be easier than debugging a threads
implementation...
-- 
:FATAL ERROR -- ERROR IN USER
········@ntlug.org- <http://www.hex.net/~cbbrowne/wpeditor.html>
From: Ken Raeburn
Subject: Re: Emacs Lisp
Date: 
Message-ID: <tx1hfn7issw.fsf@raeburn.org>
········@news.hex.net (Christopher Browne) writes:

> "The Thing That Is Annoying About Emacs" is that it is effectively
> single-threaded, so that if, for instance, GNUS goes off and starts
> doing stuff, everything else is blocked from happening.

You've hit on the exact thing that got me interested in this in the
first place.  However, the UI issues are non-trivial, and this project
has enough stuff left to do that I'm not thinking about them right
now.  Low-level thread issues, a little, but higher-level user
interaction stuff, not at all.

> An approach that I'd regard as "neat" (and which isn't necessarily
> the Right Way to implement) would be for Emacs to take on
> a "microkernel" architecture, where there would be a central process
> that controls the session, and then spawns "servers" as needed to
> support buffers/windows/internal tasks.
> 
> It would very much make sense to "spawn" a server specifically to
> support the back end handling of GNUS, for instance.

Only one?  I want two nntp servers and a pop server contacted
simultaneously....
From: Christopher Browne
Subject: Re: Emacs Lisp
Date: 
Message-ID: <3kbj3.37469$AU3.735796@news2.giganews.com>
On 14 Jul 1999 03:39:27 -0400, Ken Raeburn <·······@raeburn.org> wrote:
>········@news.hex.net (Christopher Browne) writes:
>
>> "The Thing That Is Annoying About Emacs" is that it is effectively
>> single-threaded, so that if, for instance, GNUS goes off and starts
>> doing stuff, everything else is blocked from happening.
>
>You've hit on the exact thing that got me interested in this in the
>first place.  However, the UI issues are non-trivial, and this project
>has enough stuff left to do that I'm not thinking about them right
>now.  Low-level thread issues, a little, but higher-level user
>interaction stuff, not at all.

Making sure that things that need to synchronize do so properly is A
Problem.

>> An approach that I'd regard as "neat" (and which isn't necessarily
>> the Right Way to implement) would be for Emacs to take on
>> a "microkernel" architecture, where there would be a central process
>> that controls the session, and then spawns "servers" as needed to
>> support buffers/windows/internal tasks.
>> 
>> It would very much make sense to "spawn" a server specifically to
>> support the back end handling of GNUS, for instance.
>
>Only one?  I want two nntp servers and a pop server contacted
>simultaneously....

That's a synchronization issue as mentioned above.

If the Spawning-GNUS "back end" were architected so as to allow one to
have three "feeder" processes providing data to the 'virtual news
spool,' then that would doubtless permit one to have as many "Guile
servers" as one wanted.  

If there was some critical resource to which only serial access were
permitted, then it might only be practical to have one "Guile server"
attached for this purpose.
-- 
Rules of the Evil Overlord #34. "If my supreme command center comes
under attack, I will immediately flee to safety in my prepared escape
pod and direct the defenses from there. I will not wait until the
troops break into my inner sanctum to attempt this." 
········@ntlug.org- <http://www.hex.net/~cbbrowne/ipnntp.html>
From: Ray Blaak
Subject: Re: Emacs Lisp
Date: 
Message-ID: <m3iu7m1pff.fsf@vault83.infomatch.bc.ca>
········@news.hex.net (Christopher Browne) writes:
> If there was some critical resource to which only serial access were
> permitted, then it might only be practical to have one "Guile server"
> attached for this purpose.

Well, back to Elisp and dynamic scoping, dynamically scoped symbols imply
serialized access to all symbols. Two threads overriding the same symbol with a
let, for example, effectively set the same global value at potentially the
same time. This is hard to avoid, because there are really no such things as
local variables in Elisp (lexical-let excepted, but that's really a hack).

A lexically-scoped Guile is what is needed for Emacs to really rock with
multi-threading.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Barry Margolin
Subject: Re: Emacs Lisp
Date: 
Message-ID: <nQfj3.1360$KM3.358422@burlma1-snr2>
In article <··············@vault83.infomatch.bc.ca>,
Ray Blaak  <·····@infomatch.com> wrote:
>Well, back to Elisp and dynamic scoping, dynamically scoped symbols imply
>serialized access to all symbols. Two threads overriding the same symbol with a
>let, for example, effectively set the same global value at potentially the
>same time. This is hard to avoid, because there are really no such things as
>local variables in Elisp (lexical-let excepted, but that's really a hack).

This depends on the implementation.  If you implement dynamic variables
using deep binding, you can have a separate binding stack in each thread.
Lisp Machines automatically unbound and bound special variables whenever
doing a thread switch.
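
A sketch of the deep-binding idea, in Common Lisp used as pseudo-code
(all names invented): lookups walk a binding stack instead of smashing
a global cell, and in a threaded Lisp each thread simply owns its own
*bindings* stack.

(defvar *bindings* '())                      ; ((symbol . value) ...), one stack per thread

(defun dynamic-lookup (sym)
  (let ((hit (assoc sym *bindings*)))
    (if hit (cdr hit) (symbol-value sym))))  ; fall back to the global cell

(defmacro dynamic-let ((sym val) &body body)
  `(let ((*bindings* (acons ,sym ,val *bindings*)))
     ,@body))

(defvar case-fold-search t)                  ; elisp-flavoured example variable
(defun folding-p () (dynamic-lookup 'case-fold-search))
(dynamic-let ('case-fold-search nil)
  (folding-p))                               ; => NIL inside the binding
(folding-p)                                  ; => T outside it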

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Ray Blaak
Subject: Re: Emacs Lisp
Date: 
Message-ID: <m3oghm2rl2.fsf@ns28.infomatch.bc.ca>
Ken Raeburn <·······@raeburn.org> writes:
> I've just asked on the Guile list for help with this project.  If
> you're interested in coding, let me know.

It would be fun, but I have no time. If I was single, childless and unemployed,
then just maybe.

> the goals of the Guile project include having it be the primary extension
> languange for FSF projects, and to be able to translate other extension
> languages, to support input files in multiple languages with only one
> extension package.  Translating Elisp and eliminating the Lisp engine is
> "merely" an instance of the latter.

If the primary language is missing a "primitive" feature of the secondary
language, then that feature must be implemented. For example, implementing lisp
in C is not a matter of translation but of writing a scheme virtual machine,
especially with regards to garbage collection.

Dynamic scoping is a fundamental feature of Elisp. *All* (most?) symbols are
looked up dynamically. I suspect that translation cannot be proven to be safely
correct. If one wants only a Guile engine in Emacs, and all other languages are
handled by extension packages, then the current Elisp implementation in C will
just be changed to an Elisp implementation in Guile (which is not necessarily a
bad thing).

I would seriously consider giving Guile a superset of features so that it can
handle multiple languages more easily, e.g., dynamic scoping, and static
name-spaces. Fortunately for Guile, Scheme is already pretty powerful and
elegant. It is just that here we have one case where there is much work to be
done to translate the secondary language into the primary one.
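
(For what it's worth, Common Lisp already lives with exactly that kind
of coexistence: variables are lexical unless declared special, and
special ones behave the elisp way.  A small illustration, made-up
names:)

(defvar *indent* 0)                 ; DEFVAR makes it special, i.e. dynamic

(defun report () (format t "indent=~a~%" *indent*))

(defun demo ()
  (let ((*indent* 4))               ; dynamic rebinding, visible to callees
    (report))                       ; prints indent=4
  (report))                         ; prints indent=0 again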

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Marco Antoniotti
Subject: Re: Emacs Lisp
Date: 
Message-ID: <lwaet6ktny.fsf@copernico.parades.rm.cnr.it>
Ray Blaak <·····@infomatch.com> writes:


> I would seriously consider giving Guile a superset of features so that it can
> handle multiple languages more easily, e.g., dynamic scoping, and static
> name-spaces. Fortunately for Guile, Scheme is already pretty powerful and
> elegant. It is just that here we have one case where there is much work to be
> done to translate the secondary language into the primary one.

#define flame-bait-ahead
So, I kind of remember the discussions about Guile at the very
beginning.  The basic argument was

	CL is too big.

Now Guile seems to be approaching the 903 pages of Stroustrup's latest
C++ book :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Christian Lynbech
Subject: Re: Emacs Lisp
Date: 
Message-ID: <ofu2reneig.fsf@chl.tbit.dk>
>>>>> "Marco" == Marco Antoniotti <·······@copernico.parades.rm.cnr.it> writes:

Marco> Now Guile seems to be approaching the 903 pages of Stroustrup's
Marco> latest C++ book :)

If only guile had 900 pages of documentation, many people would be
happier :-)


---------------------------+--------------------------------------------------
Christian Lynbech          | Telebit Communications A/S                       
Fax:   +45 8628 8186       | Fabrikvej 11, DK-8260 Viby J
Phone: +45 8628 8177 + 28  | email: ···@tbit.dk --- URL: http://www.tbit.dk
---------------------------+--------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
                                        - ·······@hal.com (Michael A. Petonic)
From: Reini Urban
Subject: Re: Emacs Lisp
Date: 
Message-ID: <3785ed29.1854096@judy.x-ray.local>
Ray Blaak <·····@infomatch.com> wrote:
>The real problem is the dynamic scoping of Elisp. Is it even possible in
>theory to automatically translate Elisp to Guile?
...
>Halting problem indeed.
!

>Wouldn't it be simpler to simply have both an Elisp engine and a Guile engine
>in Emacs and leave it at that?
>
>Or will Guile have optional dynamic scoping? If so, then one can just wrap
>invocations Elisp code with dynamic scoping enabled. Of course, this is also
>effectively having both engines in place.

one single guile engine would be enough. lexical guile could emulate
dynamic elisp without any major problem.
so it can benefit from guile enhancements (threads, ffi) while keeping
backwards compatibility.

my problem is how to introduce this separation into the existing sources
and to the user:

define new primitives for lexical mode and keep the old for dynamic
mode? (existing guile libs must be changed but existing elisp code not,
existing elisp code can be improved step by step.)

or define new primitives for dynamic mode and change the old to lexical?
(existing guile or scheme libs can stay, but existing elisp code must be
converted.)

this depends on the community. in autolisp (with similar problems) we
will probably have to go the first way, because of the inexperienced
community. the elisp community could probably take the better step.

a german project is currently building a tool to search for possible
critical parts in dynamic elisp which might cause problems if converted
ad hoc to lexical variable lookup. see
<···············@brabantio.informatik.uni-tuebingen.de>

--
Reini Urban
http://xarch.tu-graz.ac.at/autocad/news/faq/autolisp.html
From: Ray Blaak
Subject: Re: Emacs Lisp
Date: 
Message-ID: <m3r9mgewhn.fsf@vault84.infomatch.bc.ca>
······@xarch.tu-graz.ac.at (Reini Urban) writes:
> my problem is how to introduce this seperation into the existing sources
> and to the user:
> 
> define new primitives for lexical mode and keep the old for dynamic
> mode? (existing guile libs must be changed but existing elisp code not,
> existing elisp code can be improved step by step.)

Well if Guile is supposed to be Scheme, then having the old primitives be
dynamic makes it no longer Scheme. Wouldn't some conversion of elisp have to
be done anyways, due to the whole business of an elisp symbol having a data
value and a function value?

> or define new primitives for dynamic mode and change the old to lexical?
> (existing guile or scheme libs can stay, but existing elisp code must be
> converted.)

But there are mountains and mountains of elisp code. Can one really statically
analyze dynamically scoped code in general, especially considering functional
arguments? Emacs is rampant with hook functions, for example. One cannot
analyze modules in isolation, but only in conjunction with other running
modules. Consider this elisp snippet:

(defun my-pretty-printer (buffer)
  ;; If font-lock is running then log anything it does.
  (let ((font-lock-verbose t)) 
    (do-much-work-and-buffer-changes-here buffer)))

How does one know that font-lock-verbose is really being overridden here, as
opposed to just being locally defined? There very likely might not be a
(require 'font-lock) if this module has nothing else to do with font-lock. Does
one perform the analysis in a "typical" Emacs environment and simply look up
existing global symbols? What if the symbol being overridden comes from a
rarely used module? Or does one analyze the whole elisp suite in its entirety?

Even if one can do an approximately good job, who is going to test all the
legacy elisp code?

Why can't the primitives just stay the same in both languages, but simply have
elisp and guile run in different spaces (even if the elisp "space" is
implemented by Guile)? I would think that this is much less work.

> a german project is currently building a tool to search for possible critical
> parts in dynamic elisp which might cause problems if converted adhoc to
> lexical variable lookup. see

The main thing I am amazed by in this whole guile/elisp thread is the attempt
to statically convert elisp code into a lexically scoped language. I don't
believe it is possible, at least in the general case. I think that truly
knowing what some dynamically scoped code does is equivalent to executing it. I
would be very interested, however, if someone who thinks it is possible could
explain why, or point me in the direction where I could educate myself.

Note: I personally think that dynamic scoping is insane, and that Scheme is
pretty much the coolest language around. However, having done some fairly
intense elisp development, I have been saved by dynamic scoping, when that was
the only way I could override some existing Emacs behaviour. So, I hate it, but
grudgingly admit to its occasional usefulness.
-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Tim Bradshaw
Subject: Re: Emacs Lisp
Date: 
Message-ID: <ey3yagncpq3.fsf@lostwithiel.tfeb.org>
* Ray Blaak wrote:

> But there are mountains and mountains of elisp code. Can one really
> statically analyze dynamically scoped code in general, especially
> considering functional arguments? 

I think elisp also has some fairly obscure semantics, particularly
with buffer-local variables.  You can quite happily swap buffers in
the middle of a function and I'm really not sure what happens in those
cases. In practice it looks like you get separate bindings per buffer,
because, I guess, elisp is shallow-bound.

And as you say there is a huge mass of elisp code often not really
that well written & relying on obscure quirks of emacs.  Look how hard
it was to change keymaps to opaque types (this may not have happened
yet other than for xemacs). And if any substantial amount of that
stuff breaks it's a real pain.

I wish they'd just leave it alone actually: elisp is a kludge, but it
works, kind of like fortran.  If you want a better emacs it's
probably better just to start from scratch.

--tim
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lwaetgsfeo.fsf@copernico.parades.rm.cnr.it>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> Marco Antoniotti wrote:
> > 
> > Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> > 
> > > Marco Antoniotti wrote:
> > > >
> > > > Nope. *best* is the Common Lisp Macro system, with Scheme/Dylan syntax
> > > > stuff a second best.  The notion of 'upvar' in Tcl makes my head
> > >
> > > I am surprised at this because Tcl's [upvar] is simply an explicit form
> > > of dynamic binding which I seem to remember is what Lisp uses. (That
> 
> Mistake, I think I meant dynamic scoping.

We understood.  Of course, we understood that you are not acquainted
with modern (i.e. post 1984) Lisps. :)

> > > is how the 'let' function can be (is) implemented as a lambda
> > > function).
> > 
> > Ahem!  LET is essentially a macro built on top of LAMBDA application.
> 
> I know that.
> 
> > Dynamic binding is another beast altogether, which Common Lisp and
> > Scheme (statically scoped languages) allow in through a back door.
> 
> If they are statically scoped how does the following work.
> 
> Look at the following code.
> 
> 	(let ((a 2))
> 	  (let ((b 3))
> 	    (+ a b)))
> 
> When you expand the macro let you get something like.
> 
> 	((lambda (a)
> 	   ((lambda (b) 
> 	      (+ a b)) 3)) 2)
> 
> Which is equivalent (apart from side effects) of.
> 
> 	(defun inner (b)
> 	  (+ a b))
> 
> 	(defun outer (a)
> 	  (inner 3))
> 
> 	(outer 2)

I can only quote Dr. Evil, on this: "you just don't get it!" :)

The forms are not equivalent.  What we can see is just how pernicious
Tcl influence can be.

> So what is the process which associates the use of "a" inside inner
> with the formal argument of outer. I 
> 
> The Tcl equivalent is
> 
> 	proc inner {b} {
> 	    upvar 1 a a;
> 	    expr {$a + $b}
> 	}
> 
> 	proc outer {a} {
> 	    inner 3
> 	}
> 
> 	outer 2

Unbelievable hack due to the lack of any notion of scoping (apart from
that of 'global' scope).

> > The notion of [upvar] and [uplevel] (with the optional numeric argument which allows
> > you to inspect N levels of stack is truly hackish.  The utility of
> 
> As I have said before it is simply an explicit form of the Lisp mechanism which allows
> the let macro to be implemented using lambda. It is very rare that N
> is more than 1.

Very well. Implement LET in Tcl and submit it here.

> 
> > this construct is not in question.  But its use as a
> > 'macro-approximating' device is questionable.
> > 
> > BTW. Is TCL statically or dynamically scoped?
> > 
> 
> Tcl is statically scoped but you can use upvar to get access to variables
> in enclosing stack frames.

If Tcl is statically scoped (a fact which has been contradicted by
another poster) how would you translate in Tcl the following piece of
code

(defvar x 34)

(defvar f (let ((x 3)) (lambda (y) (+ x y))))

(print (funcall f 4))

(print x)

?

	...

> > You are very mistaken.  Common Lisp macros are definitively not
> > another language.  They manipulate S-expressions, which are what Lisp
> 
> Lisp macros manipulate S-expressions but the macro definitions themselves
> are interpreted differently to Lisp expressions.
> 
> A simple macro version of setq would convert from
> 	(setq symbol '(list))
> to
> 	(set 'symbol '(list))
> 
> A macro setq is not interpreted the same way as set is because if it
> was an error would occur when the Lisp interpreter tried to get the
> value of the variable symbol before symbol was created.

Excuse me?

(defmacro setq (symbol value) ; Let's forget about redefinitions of SETQ etc.
  (list 'set (list 'quote symbol) value))

What is it that you do not understand?

> This is where the inconsistencies come from.
> 
> > is made of.  They generate inconsistencies insofar as you are careless
> > when dealing with possible name capture problems. GENSYM and GENTEMP
> > are there to help you.
> > 
> > > Tcl on the other hand doesn't
> > > need a macro language and as such is much more consistent than Lisp.
> > 
> > You severely underestimate the power of Common Lisp (Scheme and Dylan)
> > macro systems and ascribe to them inconsistencies which are not
> 
> I don't underestimate the power of Lisp macros, I am just saying that I
> find the way they warp the otherwise simple Lisp syntax / semantics off-putting.

The semantics of macros is just that of replacement.  Nothing more.
The data structures manipulated within macros are all the standard
Lisp ones.
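
You can watch the replacement happen (MY-WHEN, READY-P and LAUNCH are
made-up names):

(defmacro my-when (test &body body)
  `(if ,test (progn ,@body) nil))

(macroexpand-1 '(my-when (ready-p) (launch)))
;; => (IF (READY-P) (PROGN (LAUNCH)) NIL)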

> Lisp does not really need macros, they are really just a way of
> creating short cuts.

Of course.  But the Lisp macro system makes the language more powerful
than any other one when it comes to extending the behavior of the
system or improving compiled code performance.

> 
> > there.  It is true that Tcl does not need a macro system. But mostly
> > because, since you have Common Lisp, you do not need Tcl altogether :)
> 
> Not a particularly effective argument.

Truth is not necessarily effective :)

> > Apart from some good ole flaming, why doesn't Tcl need a Lisp style
> > macro system? Just because you have [eval]?  Not a very good
> > argument. Especially when brought up in front of a Lisp audience.
> > 
> 
> Tcl does not need a macro system because it has a simple consistent
> syntax / semantics and all commands are treated equally and it is
> possible to change the behaviour of existing commands.

Which of these properties does not hold in CL without macros?  Which
one does not hold if you throw macros into the mix?

> > > If I needed a macro language in Tcl I can just write one.
> > 
> > I just received an email from Cameron Laird, citing a 'procm' form in
> > Tcl, which is supposed to do 'scan time' macros - probably something
> > in line with the real thing. However, the manual pages for 8.1 at Scriptics
> > do not mention it.  Yet, I suppose that it is still an experimental
> > feature that maybe will appear in Tcl in a later edition - a few
> > lustres after Lisp had them :)
> > 
> 
> I would say that that is probably something Cameron has created. Despite
> what you think Tcl is not crying out for a Lisp like macro system. What
> it is crying out for is more data types and it will soon be getting them.
> Including a 'proper' [lambda] implementation.

Which means?  What will the value of

   lambda {x} {lambda {y} {expr $x + $y}}

be? (you can make your substitution of {} with []; it is just another
confusing aspect of Tcl.)

And, once you have that (assuming the semantics were the right one),
what will Tcl have added to Common Lisp of 1984.

> 
> I would say that Tcl (without macro system) can do anything that Lisp
> (with macro system) can do.

No it can't. With or without the Macro System. The above example I
wrote is just a little tiny bit.  As a proof of your argument you
should come up with a bit of Tcl that cannot be easily (where the
degree of "easily" can be agreed upon) translated into Common Lisp.

> > > Lisp and Tcl have a lot more in common than Lispers seem to want to
> > > acknowledge.
> > 
> > Ahem!  Do you remember the "revised" Tcl paper by Ousterhout (sorry
> > for the spelling mistakes :) ), where the essential
> > addition/correction was the mention of the L-word? :)
> > 
> 
> No I don't.

Well, the story goes like this.  The author of Tcl writes up a lengthy
white paper on "scripting" and interpreted languages with the intent
of explaining his choices and the structure of Tcl.  He does not
mention the L-word once.  Public uproar from us 20 zealots around the
world makes him revise the paper by including the L-word.  End of
story.

Let's not kid ourselves.  The only reason why Tcl got any attention
comes from the Tk part.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Paul Duffin
Subject: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <377C7FA0.2D85@mailserver.hursley.ibm.com>
I am finding this discussion extremely enlightening and I do appreciate
the explanations that I am receiving. Lisp certainly has changed an
awful lot since I last played with it. I did think that dynamic scoping
was such a fundamental part of the language that it could never change.

Marco Antoniotti wrote:
> 
> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> >
> > Mistake, I think I meant dynamic scoping.
> 
> We understood.  Of course, we understood that you are not acquainted
> with modern (i.e. post 1984) Lisps. :)
> 

Which turns out to be the whole problem.

> > > > is how the 'let' function can be (is) implemented as a lambda
> > > > function).
> > >
> > > Ahem!  LET is essentially a macro built on top of LAMBDA application.
> >
> > I know that.
> >
> > > Dynamic binding is another beast altogether, which Common Lisp and
> > > Scheme (statically scoped languages) allow in through a back door.
> >
> > If they are statically scoped how does the following work.
> >
> > Look at the following code.
> >
> >       (let ((a 2))
> >         (let ((b 3))
> >           (+ a b)))
> >
> > When you expand the macro let you get something like.
> >
> >       ((lambda (a)
> >          ((lambda (b)
> >             (+ a b)) 3)) 2)
> >
> > Which is equivalent (apart from side effects) of.
> >
> >       (defun inner (b)
> >         (+ a b))
> >
> >       (defun outer (a)
> >         (inner 3))
> >
> >       (outer 2)
> 
> I can only quote Dr. Evil, on this: "you just don't get it!" :)
> 

I did actually test the above on Emacs and both worked identically,
at least in terms of the value returned. Elisp-itis strikes again.

> The forms are not equivalent.  What we can see is just how pernicious
> Tcl influence can be.
> 

Actually the problem is Elisp-itis.

> > So what is the process which associates the use of "a" inside inner
> > with the formal argument of outer. I
> >
> > The Tcl equivalent is
> >
> >       proc inner {b} {
> >           upvar 1 a a;
> >           expr {$a + $b}
> >       }
> >
> >       proc outer {a} {
> >           inner 3
> >       }
> >
> >       outer 2
> 
> Unbelievable hack due to the lack of any notion of scoping (apart from
> that of 'global' scope).
> 

I was just trying to emulate what I now know is 'classic' Lisp behaviour. [upvar] is
normally used for simulating the passing of references to procedures and I would
agree that it is a bit hacky. C++ uses & which could be used in Tcl to hide the
[upvar] just like in C++ it hides the *.

How does Lisp pass references ?

> > > The notion of [upvar] and [uplevel] (with the optional numeric argument which allows
> > > you to inspect N levels of stack is truly hackish.  The utility of
> >
> > As I have said before it is simply an explicit form of the Lisp mechanism which allows
> > the let macro to be implemented using lambda. It is very rare that N
> > is more than 1.
> 
> Very well. Implement LET in Tcl and submit it here.
> 

I can implement let but it obviously will not be the same as Lisp's
let because Tcl does not have the same scoping rules as Lisp.

> >
> > > this construct is not in question.  But its use as a
> > > 'macro-approximating' device is questionable.
> > >
> > > BTW. Is TCL statically or dynamically scoped?
> > >
> >
> > Tcl is statically scoped but you can use upvar to get access to variables
> > in enclosing stack frames.
> 
> If Tcl is statically scoped (a fact which has been contradicted by
> another poster) how would you translate in Tcl the following piece of
> code
> 
> (defvar x 34)
> 
> (defvar f (let ((x 3)) (lambda (y) (+ x y))))
> 
> (print (funcall f 4))
> 
> (print x)
> 

From the dragon book the above is an example of 
	"Lexical Scope with Nested Procedures"
whereas Tcl has
	"Lexical Scope without Nested Procedures".

The index says for "Static Scope" see "Lexical Scope" so I am taking
them to be the same.

Lexical scope is therefore language dependent and different languages
implement different rules. So Tcl implements Tcl lexical scope which is not the
same as Lisp lexical scope.

I am asking the following questions to try and reset my brain's Lisp
module.

	The following function is supposed to return a function which has the
	effect of
		(f (g x))
	
	(defun compose (f g) (lambda (x) (f (g x))))

	Am I right in saying that when the lambda function above is created it
	is given an environment which contains variables f and g which are bound
	to the values of the formal arguments to compose ?

	If I had lambda function 1 embedded inside lambda function 2 inside lambda 
	function 3 do their environments refer to each other. e.g. 1->2->3->global ?

	Does this happen if I use defun to create the functions ?

> ?
> 
>         ...
> 
> (defmacro setq (symbol value) ; Let's forget about redefinitions of SETQ etc.
>   (list 'set (list 'quote symbol) value))
> 
> What is it that you do not understand?
> 

I could be suffering from elisp-itis here again as I thought that macros had
funny uses of @ symbols. That looks remarkably like building up a Tcl command to
be [eval]ed.

> 
> The semantics of macros is just that of replacement.  Nothing more.
> The data structures manipulated within macros are all the standard
> Lisp ones.
> 

True.

Take the following

	(foo fred 1)

If I implement foo as a function then fred is a variable and foo gets a 'copy' of
its value.

If I implement it as a macro then fred can have any meaning.

The only difference between Lisp and Tcl in this regard is that Lisp needs a macro
to do its stuff and Tcl needs [upvar] and [eval].

> > Lisp does not really need macros, they are really just a way of
> > creating short cuts.
> 
> Of course.  But the Lisp macro system makes the language more powerful
> than any other one when it comes to extending the behavior of the
> system or improving compiled code performance.
> 
> >
> > > there.  It is true that Tcl does not need a macro system. But mostly
> > > because, since you have Common Lisp, you do not need Tcl altogether :)
> >
> > Not a particularly effective argument.
> 
> Truth is not necessarily effective :)
> 
> > > Apart from some good ole flaming, why doesn't Tcl need a Lisp style
> > > macro system? Just because you have [eval]?  Not a very good
> > > argument. Especially when brought up in front of a Lisp audience.
> > >
> >
> > Tcl does not need a macro system because it has a simple consistent
> > syntax / semantics and all commands are treated equally and it is
> > possible to change the behaviour of existing commands.
> 
> Which of these properties does not hold in CL without macros?  Which
> one does not hold if you throw macros into the mix?
> 

The fact that not all 'commands' are treated equally. You have macros and you have
functions and they are treated differently which can be a source of some confusion.

> 
> Which means?  What will the value of
> 
>    lambda {x} {lambda {y} {expr $x + $y}}
> 
> be? (you can make your substitution of {} with []; it is just another
> confusing aspect of Tcl.)
> 

In the implementation I have the above would fail with an error because Tcl's
scoping rules are different to Lisp.
However the following would do the same thing.

	lambda {x} {curry [lambda {x y} {expr {$x + $y}}] $x}

> And, once you have that (assuming the semantics were the right one),

The semantics are right (same value) but the syntax is different.

> what will Tcl have added to Common Lisp of 1984.
> 

Less () and more {}[] :)

Tcl does not add to Common Lisp, Tcl is Tcl.

> >
> > I would say that Tcl (without macro system) can do anything that Lisp
> > (with macro system) can do.
> 
> No it can't. With or without the Macro System. The above example I
> wrote is just a little tiny bit.  As a proof of your argument you

Countered.

> should come up with a bit of Tcl that cannot be easily (where the
> degree of "easily" can be agreed upon) translated into Common Lisp.
> 

Why ? I never said that Tcl could do more than Lisp.

> > > > Lisp and Tcl have a lot more in common than Lispers seem to want to
> > > > acknowledge.
>
> Well, the story goes like this.  The author of Tcl writes up a lengthy
> white paper on "scripting" and interpreted languages with the intent
> of explaining his choices and the structure of Tcl.  He does not
> mention the L-word once.  Public uproar from us 20 zealots around the
> world makes him revise the paper by including the L-word.  End of
> story.
> 

Alright I will rephrase.

	Lisp and Tcl have a lot more in common than SOME Lispers AND TCLERS
	seem to want to acknowledge.

> Let's not kid ourselves.  The only reason why Tcl got any attention
> comes from the Tk part.
> 

Tcl does get a lot of introductions through Tk and Expect just as Lisp
gets (got) a lot of introductions (albeit badly warped :-) ) from Emacs.
However Tcl is more than Tk, just as Lisp is more than Emacs.

Just to reiterate, Lisp (except elisp) has come a long way since 
"dynamic scoping".

This reduces my comments to subjectivity which you should probably
ignore.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <nkj3dz7te9n.fsf@tfeb.org>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> How does Lisp pass references ?

Everything in Lisp is a reference, but some objects are immutable
(numbers, characters and some others).  Sort of.
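
I.e. what a function receives is the object itself, not a copy, so
mutation is visible to the caller (toy example, made-up names):

(defun blank-middle (v)
  (setf (aref v 1) nil)      ; mutate the very vector the caller handed us
  v)

(defvar *v* (vector 'a 'b 'c))
(blank-middle *v*)
*v*                          ; => #(A NIL C)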


> The fact that not all 'commands' are treated equally. You have macros and you have
> functions and they are treated differently which can be a source of some confusion.
> 

I don't think that's a good way of thinking about it.  Lisp has
functions and a small (fixed) number of special operators, which have
special idiosyncratic evaluation rules.  Examples of special operators
are FUNCTION (construct or return a function depending on form of
argument) and IF (evaluate first arg then either 2nd or third).
Common Lisp has 20-odd of these things (23? I forget the number),
although you can have many less (I *think* you can get away with just
LAMBDA in the sense that that is enough to be Turing equivalent?).
Each of those things has to be wired into the language itself.
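
For instance IF quite happily carries around a form it may never
evaluate:

	(if (plusp 1)
	    'taken
	    (error "never evaluated"))   ; => TAKEN, the else form is not touched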

In addition to this there is a very powerful source-code rewriting
system implemented in Lisp which is exposed to the user, which is
macros.  There is nothing magic about macros: given a Lisp without a
macro system you could implement your own quite easily (because Lisp
source is Lisp data).

--tim
From: Paul Duffin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <37806B57.353C@mailserver.hursley.ibm.com>
Tim Bradshaw wrote:
> 
> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> > How does Lisp pass references ?
> 
> Everything in Lisp is a reference, but some objects are immutable
> (numbers, characters and some others).  Sort of.
> 
> > The fact that not all 'commands' are treated equally. You have macros and you have
> > functions and they are treated differently which can be a source of some confusion.
> >
> 
> I don't think that's a good way of thinking about it.  Lisp has
> functions and a small (fixed) number of special operators, which have
> special idiosyncratic evaluation rules.  Examples of special operators

So what you are saying is that there exist certain operators which are
treated specially by Lisp, e.g. they may break some of the following
rules

    Unquoted symbols are not treated as variables.
	e.g. in
		(setq hello 12)
	hello is not treated as a variable whereas in
		(foo hello 12)
	where foo is a simple function hello is treated as a variable

    Unquoted lists are not evaluated.
	e.g. in
		(if (eq a b)
			(setq a 12)
		    (setq a 13))
	neither the second nor the third argument to if is evaluated before
	if is 'called'. The if operator itself decides which one will
	be evaluated.

The point I am making is that NO command in Tcl breaks Tcl's evaluation
rules.

> are FUNCTION (construct or return a function depending on form of
> argument) and IF (evaulate first arg then either 2nd or third).
> Common Lisp has 20-odd of these things (23? I forget the number),
> although you can have many less (I *think* you can get away with just
> LAMBDA in the sense that that is enough to be Turing equivalent?).
> Each of those things has to be wired into the language itself.
> 
> In addition to this there is a very powerful source-code rewriting
> system implemented in Lisp which is exposed to the user, which is
> macros.  There is nothing magic about macros: given a Lisp without a
> macro system you could implement your own quite easily (because Lisp
> source is Lisp data).
> 

A lot of people have been saying that one of the advantages of the Lisp
macro system is that it 'inlines' code which is compiled. Surely this
requires that macros are 'substituted' during the parsing stage and not
'evaluated' during the evaluation stage. 

Therefore an equivalent macro system could not be written in Lisp.

This is not really a criticism, more an observation (which probably will
turn out to be wrong anyway :)).

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <nkjk8sfgzu9.fsf@tfeb.org>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> So what you are saying is that there exists certain operators which are
> treated specially by Lisp. e.g. they may break some of the following 
> rules
> 
>     Unquoted symbols are not treated as variables.
> 	e.g. in
> 		(setq hello 12)
> 	hello is not treated as a variable whereas in
> 		(foo hello 12)
> 	where foo is a simple function hello is treated as a variable
> 
>     Unquoted lists are not evaluated.
> 	e.g. in
> 		(if (eq a b)
> 			(setq a 12)
> 		    (setq a 13))
> 	neither the second or third argument to if is evaluated before
> 	if is 'called'. The if operator itself decides which one will
> 	be evaluated.

Precisely.  They break other rules too, and also do `deep magic'
operations like creating functions which deal with the lexical
environment properly which you can't do otherwise.

> 
> The point I am making is that NO command in Tcl breaks Tcl's evaluation
> rules.

Well, no operator in Lisp breaks Lisp's evaluation rules, it's just
that they are defined strangely for a small set of operators.  But
seriously, does TCL not have a short-circuiting conditional operator
(I mean a conditional operator that does not evaluate its arguments if
it does not need to)?  If it does not how do you deal with this?  It
seems to me that if you have no short-circuiting conditionals in the
language you either need some equivalent of LAMBDA (or DELAY), so you
can write an eager conditional along the lines of:

	(defun eager-if (test tf ff)
	   (if test (funcall tf) (funcall ff)))

Then you write:

	(eager-if (my-test) (lambda () (explode-world)) (lambda ()))

relying on LAMBDA, which must be special (you can also define EAGER-IF
without using IF if you're willing to change your idea of what truth &
falsity are).  Alternatively you might have lazy evaluation in the
language so things like IF don't evaluate args till they need them.
Finally, you might have the completely horrible solution of giving
quoted forms to IF and calling EVAL at runtime, and really sacrificing
any hope of ever having good performance or anything because you are
now committed to a fundamentally interpreted language and also to
runtime variable lookup all over the place.
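
(Spelled out, the horrible variant is something like this -- QUOTED-IF
and STAND-DOWN are made-up names, and note that EVAL sees only the
global environment, which is exactly where the performance and
variable-lookup pain comes from:)

	(defun quoted-if (test then-form else-form)
	  (if test (eval then-form) (eval else-form)))

	(quoted-if (my-test) '(explode-world) '(stand-down))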

Is there another way out?

> A lot of people have been saying that one of the advantages of the Lisp
> macro system is that it 'inlines' code which is compiled. Surely this
> requires that macros are 'substituted' during the parsing stage and not
> 'evaluated' during the evaluation stage. 
> 
> Therefore an equivalent macro system could not be written in Lisp.
> 

Yes, it can easily be written in Lisp.  Source code is data, remember!
Just read the source, transform it with macros and then evaluate or
compile the result.

In fact I really meant something stronger than this when I said `could
be implemented in Lisp'.  A lisp compiler can be implemented in Lisp
(and almost all of them are), but I really meant that a macro system
can be implemented in Lisp very much more simply than a compiler, and
without any knowledge of how the compiler works: you just macroexpand
then feed the result to the compiler.
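
To make that concrete, a deliberately tiny user-level expander (toy
code, made-up names; it knows nothing about QUOTE, lambda lists and the
other things a real one must respect):

	(defvar *expanders* (make-hash-table))

	(defun def-expander (name fn)
	  (setf (gethash name *expanders*) fn))

	(defun expand-all (form)
	  (cond ((and (consp form) (gethash (first form) *expanders*))
	         (expand-all (funcall (gethash (first form) *expanders*) form)))
	        ((consp form) (mapcar #'expand-all form))
	        (t form)))

	;; a user-level UNLESS written against that system:
	(def-expander 'my-unless
	  (lambda (form)
	    (destructuring-bind (op test &rest body) form
	      (declare (ignore op))
	      `(if ,test nil (progn ,@body)))))

	(expand-all '(my-unless (done-p) (keep-going)))
	;; => (IF (DONE-P) NIL (PROGN (KEEP-GOING)))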

Incidentally, inlining is not really approved of as a use of the macro
system now -- CL has a way of specifying inline functions which do the
same thing without the semantic problems you get with macros unless
you're careful.  Not all major implementations actually implement
user-declarable inline functions yet unfortunately.
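
(For reference, the CL way is a plain declaration, with ordinary
function semantics:

	(declaim (inline distance-squared))
	(defun distance-squared (x y) (+ (* x x) (* y y)))

and callers compiled after that point may have the body open-coded.)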

--tim
From: Axel Schairer
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <fm9vhbzb1rk.fsf@clair.dai.ed.ac.uk>
Tim Bradshaw <···@tfeb.org> writes:
> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> > The point I am making is that NO command in Tcl breaks Tcl's evaluation
> > rules.
[snip] 
> does TCL not have a short-circuiting conditional operator
> (I mean a conditional operator that does not evaluate its arguments if
> it does not need to)?  

Of course it does.  More generally, no argument is evaluated before it
is passed to a procedure unless evaluation is explicitly requested by
the caller (by enclosing the argument in `[' and `]').  That's
probably what Paul meant by not breaking the evaluation rules.  (I
have not considered variable substitution here.)

> Finally, you might have the completely horrible solution of giving
> quoted forms to IF and calling EVAL at runtime, and really sacrificing
> any hope of ever having good performance or anything because you are
> now committed to a fundamentally interpreted language and also to
> runtime variable lookup all over the place.

This is exactly what happens.  In

	my_if {test $x} {explode $world} {shine $sun}

the procedure receives the strings (intentionally representing code)
`test $x', `explode $world' and `shine $sun' and may or may not
evaluate either (or all) of them, and it may evaluate them in strange
(lexical) environments if it so wishes (by using or not using upvar
and global).

Now, saying that `NO command in Tcl breaks Tcl's evaluation rules'
doesn't really say too much, does it?  I mean you can say the same for
every language (as in fact, Tim, you did for CL):

> Well, no operator in Lisp breaks Lisp's evaluation rules, it's just
> that they are defined strangely for a small set of operators.

So, I suppose the point Paul wants to make here is that in TCL no
argument is evaluated before being passed to a procedure; in CL there
are cases in which the arguments are evaluated and situations in which
you have to look up the documentation.

But since, e.g., the semantics of TCL's `if' depends on arguments
being evaluated inside the if-procedure, you have to look up the
documentation in TCL in the same way as you have to in CL:

	if [test $x] [explode $world] [save $world]

just is *not* what you want, whereas

	lookup [compute_hash $x] $table

is what you want (because supposedly lookup does not evaluate its
arguments, or in other words, it is a function in the CL sense).

This contrasts with

	(if (test x) (explode world) (save world))

and

	(lookup (compute-hash x) table)

where in both cases this is what you want.

My opinion here is: there seems to be a superficial uniformity in the
way TCL evaluates expressions, but the uniformity forces programmers
to work around the uniformity manually to get the effect they normally
want to get.  So I actually think that this uniformity is not an
advantage at all.  Or, to put it in other words: if in CL you call (or,
for that matter, define) a function, you know that the arguments are
evaluated in the lexical environment in which they occur textually.
This useful uniformity is lost to the more low-level uniformity of TCL
where, no matter which procedure you call, you have to know the
internal evaluation strategy of the specific procedure.

So in CL you have a difference between functions, macros, and special
operators, but this difference is a useful one.  And it so happens
that if you do not mess around with macros where you don't really want
them the CL code you get appears to be more uniform to me than the
code resulting from TCL's uniformity (see the examples above).

Does this make sense?

Axel
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <ey31zendjfu.fsf@lostwithiel.tfeb.org>
* Axel Schairer wrote:

[Does TCL have a short-circuiting IF?]

> Of course it does.  More generally, no argument is evaluated before it
> is passed to a procedure unless evaluation is explicitly requested by
> the caller (by enclosing the argument in `[' and `]').  That's
> probably what Paul meant by not breaking the evaluation rules.  (I
> have not considered variable substitution here.)

So this looks like tcl doesn't have special operators, it has special *syntax*
which causes evaluation to happen or not, namely []?

> the procedure receives the strings (intentionally representing code)
> `test $x', `explode $world' and `shine $sun' and may or may not
> evaluate either (or all) of them, and it may evaluate them in strange
> (lexical) environments if it so wishes (by using or not using upvar
> and global).

Strings?  Seriously?  So it necessarily parses at runtime?  So this
is, like, a huge step backwards from something like perl which does
actually build a parse tree and execute that.

Damn, I have just been sick all over my keyboard.

> So in CL you have a difference between functions, macros, and special
> operators, but this difference is a useful one.  And it so happens
> that if you do not mess around with macros where you don't really want
> them the CL code you get appears to be more uniform to me than the
> code resulting from TCL's uniformity (see the examples above).
> 

In tcl it looks like you need to know for every operator `does this
call eval, and if so on which of its arguments'.  I guess you could
argue that this is more regular than CL, but it must be hard to be
heard above the hysterical laughter and puking sounds.  At least in CL
you have n (< 30) special magic things which you have to know, and
then an unbounded number of macros which you can find out what they do
by calling MACROEXPAND.
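
For example (the exact expansion is implementation-dependent, and
HUNGRY-P/EAT are placeholders, but it will have this general shape):

    CL-USER> (macroexpand-1 '(when (hungry-p) (eat)))
    (IF (HUNGRY-P) (PROGN (EAT)))
    T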

--tim
From: Jean-Claude Wippler
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl  broken?)
Date: 
Message-ID: <37810FD6.6852339A@equi4.com>
Tim Bradshaw wrote:
> 
[Axel Schairer describes strings passed to Tcl commands]
> 
> Strings?  Seriously?  So it necessarily parses at runtime?  So this
> is, like, a huge step backwards from something like perl which does
> actually build a parse tree and execute that.

In an attempt to prevent mis-information from spreading too far:

Tcl caches a parsed representation, it works just like Perl and Python.
At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.

-- Jean-Claude
From: William Deakin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl  broken?)
Date: 
Message-ID: <3781B3D1.9D44EC1D@pindar.com>
Jean-Claude Wippler wrote:

> In an attempt to prevent mis-information from spreading too far:
>
> Tcl caches a parsed representation, it works just like Perl and Python.
> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.

Isn't this only a recent thing? I thought compilation was one of the BIG
things that happened in v8.x of tcl.

Best Regards,

Will
From: Cameron Laird
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl  broken?)
Date: 
Message-ID: <7lsqio$d59$1@Starbase.NeoSoft.COM>
In article <·················@pindar.com>,
William Deakin  <········@pindar.com> wrote:
>
>Jean-Claude Wippler wrote:
>
>> In an attempt to prevent mis-information from spreading too far:
>>
>> Tcl caches a parsed representation, it works just like Perl and Python.
>> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.
>
>Isnt this only a recent thing? I though compilation was one of the BIG
>things that happened in
>v8.x of tcl.
			.
			.
			.
Yes.  To repeat what Paul Duffin has written
nearby:  the semantics have remained unchanged.
Caching compilation has boosted performance.
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: William Deakin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl  broken?)
Date: 
Message-ID: <3782032A.799C1039@pindar.com>
I'm sorry, but I sent this message (by accident) directly to Cameron Laird.
My apologies :-)

Cameron Laird wrote:

> In article <·················@pindar.com>,
> William Deakin  <········@pindar.com> wrote:
> >
> >Jean-Claude Wippler wrote:
> >
> >> In an attempt to prevent mis-information from spreading too far:
> >>
> >> Tcl caches a parsed representation, it works just like Perl and Python.
> >> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.
> >
> >Isnt this only a recent thing? I though compilation was one of the BIG
> >things that happened in
> >v8.x of tcl.
>                         .
>                         .
>                         .
> Yes.  To repeat what Paul Duffin has written
> nearby:  the semantics have remained unchanged.
> Caching compilation has boosted performance.
> --
>
> Cameron Laird           http://starbase.neosoft.com/~claird/home.html
> ······@NeoSoft.com      +1 281 996 8546 FAX

Dear Sir,

I had not followed the whole of the thread. My mail reading was a bit
behind, although it seems to me that I merely reiterated the point made by
Paul Duffin, and that these points appear to have been made within ~5
minutes of each other by the time stamps on my news reader.

However, I did not mean to imply that the semantics or the way that tcl
executes were changed. I do not pretend to be an expert in tcl, but I have
a little experience in perl/tk development.

I was only trying to clarify the point about tcl using a cached
representation, made by Jean-Claude Wippler, and the possible confusion
that this may have caused.

Best Regards,

Will
From: Marius Vollmer
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <87yagtfuji.fsf@zagadka.ping.de>
······@Starbase.NeoSoft.COM (Cameron Laird) writes:

> >Isnt this only a recent thing? I though compilation was one of the BIG
> >things that happened in
> >v8.x of tcl.
>
> Yes.  To repeat what Paul Duffin has written
> nearby:  the semantics have remained unchanged.

Not true.  From the changelog of Tcl8:

    10/21/96 (new feature) The core of the Tcl interpreter has been
    replaced with an on-the-fly compiler that translates Tcl scripts
    to bytecoded instructions; a new interpreter then executes the
    bytecodes. The compiler introduces only a few minor changes at the
    level of Tcl scripts. The biggest changes are to expressions and
    lists.
	- A second level of substitutions is no longer done for
	  expressions.  This substantially improves their execution
	  time. This means that the expression "$x*4" produces a
	  different result than in the past if x is
	  "$y+2". Fortunately, not much code depends on the old
	  two-level semantics. Some expressions that do, such as "expr
	  [join $list +]" can be recoded to work in Tcl8.0 by adding
	  an eval: e.g., "eval expr [join $list +]".
	- Lists are now completely parsed on the first list operation
	  to create a faster internal representation. In the past, if
	  you had a misformed list but the erroneous part was after
	  the point you inserted or extracted an element, then you
	  never saw an error.  In Tcl8.0 an error will be
	  reported. This should only effect incorrect programs that
	  took advantage of behavior of the old implementation that
	  was not documented in the man pages.
    Other changes to Tcl scripts are discussed in the web page at
    http://www.sunlabs.com/research/tcl/compiler.html. (BL)
    *** POTENTIAL INCOMPATIBILITY ***

That they consider these minor changes tells you a lot about how `well
specified' Tcl was to begin with.

- Marius
From: Paul Duffin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <378354B7.2DDA@mailserver.hursley.ibm.com>
Marius Vollmer wrote:
> 
> ······@Starbase.NeoSoft.COM (Cameron Laird) writes:
> 
> > >Isnt this only a recent thing? I though compilation was one of the BIG
> > >things that happened in
> > >v8.x of tcl.
> >
> > Yes.  To repeat what Paul Duffin has written
> > nearby:  the semantics have remained unchanged.
> 
> Not true.  From the changelog of Tcl8:
> 
>     10/21/96 (new feature) The core of the Tcl interpreter has been
>     replaced with an on-the-fly compiler that translates Tcl scripts
>     to bytecoded instructions; a new interpreter then executes the
>     bytecodes. The compiler introduces only a few minor changes at the
>     level of Tcl scripts. The biggest changes are to expressions and
>     lists.
>         - A second level of substitutions is no longer done for
>           expressions.  This substantially improves their execution
>           time. This means that the expression "$x*4" produces a
>           different result than in the past if x is
>           "$y+2". Fortunately, not much code depends on the old
>           two-level semantics. Some expressions that do, such as "expr
>           [join $list +]" can be recoded to work in Tcl8.0 by adding
>           an eval: e.g., "eval expr [join $list +]".

Actually the above change was backed out and expr has the same semantics
as 7.6.

>         - Lists are now completely parsed on the first list operation
>           to create a faster internal representation. In the past, if
>           you had a misformed list but the erroneous part was after
>           the point you inserted or extracted an element, then you
>           never saw an error.  In Tcl8.0 an error will be
>           reported. This should only effect incorrect programs that
>           took advantage of behavior of the old implementation that
>           was not documented in the man pages.

Programs which mistakenly used list operators on strings are very
fragile and would probably have broken anyway.

>     Other changes to Tcl scripts are discussed in the web page at
>     http://www.sunlabs.com/research/tcl/compiler.html. (BL)
>     *** POTENTIAL INCOMPATIBILITY ***
> 
> That they consider this minor changes tell you a lot about how `well
> specified' Tcl was to begin with.
> 

Of course Lisp has never been modified in such a way as to break 
existing scripts has it !!!!

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Marco Antoniotti
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <lwhfngijji.fsf@copernico.parades.rm.cnr.it>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> Marius Vollmer wrote:
> > 
> > That they consider this minor changes tell you a lot about how `well
> > specified' Tcl was to begin with.
> > 
> 
> Of course Lisp has never been modified in such a way as to break 
> existing scripts has it !!!!
> 

Not with a version id change.  The changes were major and reflected by
a "dialect name change".  E.g. when Common Lisp was introduced
(~1984), it was a big enough change that a new name (CL) came with it.

All other changes have been incremental (mostly).  Nothing like the
situation described earlier in this thread for Tcl.

Cheers


-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <nkjiu7w320h.fsf@tfeb.org>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> 
> Of course Lisp has never been modified in such a way as to break 
> existing scripts has it !!!!

Of course.  But those changes were typically described as `major' not
`minor'.

--tim
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <ey3zp1acj3x.fsf@lostwithiel.tfeb.org>
* Jean-Claude Wippler wrote:

> Tcl caches a parsed representation, it works just like Perl and Python.
> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.

Does tcl have a conventional parse-[compile]-execute scheme then, like
Perl?  I got the impression from the post I was responding to as well
as elsewhere that it was really all strings parsed on the fly, with
perhaps some memoisation going on to make the performance better.
Indeed they seem to make a virtue of the `everything is a string' bit.

--tim
From: William Deakin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl    broken?)
Date: 
Message-ID: <3781D738.29841997@pindar.com>
Tim Bradshaw wrote:

> * Jean-Claude Wippler wrote:
>
> Does tcl have a conventional parse-[compile]-execute scheme then, like
> Perl?  I got the impression from the post I was responding to as well
> as elsewhere that it was really all strings parsed on the fly, with
> perhaps some memoisation going on to make the performance better.
> Indeed they seem to make a virtue of the `everything is a string' bit.
>
> --tim

It was my understanding that with release 8.x this changed. There was talk
of a 'tcl compiler'. But please take this with a pinch of salt. When I have
a moment I'll dig out the release notes and try to give a more definitive
answer ;-)

Cheers,

Will
From: William Deakin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl    broken?)
Date: 
Message-ID: <3781DC7B.FA0DECD1@pindar.com>
Tim Bradshaw wrote:

> Does tcl have a conventional parse-[compile]-execute scheme then, like
> Perl?  I got the impression from the post I was responding to as well
> as elsewhere that it was really all strings parsed on the fly, with
> perhaps some memoisation going on to make the performance better.
> Indeed they seem to make a virtue of the `everything is a string' bit.
>
> --tim

From "http://sunscript.sun.com/TclTkCore/8.0.html"
Tcl 8.0/Tk 8.0 Release Announcement August 18, 1997

John Ousterhout
Sun Microsystems, Inc.
···············@eng.sun.com

This message is to announce the 8.0 releases of the Tcl scripting
language and the Tk toolkit.  These are major releases with many
exciting new features, including the following:

- Tcl has a new bytecode compiler that improves performance by a factor
  of 2-10x. ...

Make of this what you will.

Best Regards,

Will
From: William Deakin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl    broken?)
Date: 
Message-ID: <3781DD7D.EE51C13B@pindar.com>
Sorry :-(
that link should have been
"http://www.scriptics.com/software/relnotes/tcl8.0" and not
"http://sunscript.sun.com/TclTkCore/8.0.html"

will
From: Cameron Laird
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <7lsqtu$dbk$1@Starbase.NeoSoft.COM>
In article <···············@lostwithiel.tfeb.org>,
Tim Bradshaw  <···@tfeb.org> wrote:
>* Jean-Claude Wippler wrote:
>
>> Tcl caches a parsed representation, it works just like Perl and Python.
>> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.
>
>Does tcl have a conventional parse-[compile]-execute scheme then, like
>Perl?  I got the impression from the post I was responding to as well
>as elsewhere that it was really all strings parsed on the fly, with
>perhaps some memoisation going on to make the performance better.
>Indeed they seem to make a virtue of the `everything is a string' bit.
>
>--tim

Tcl's interpretation shares many aspects with Perl--more so
with version 8.0 and afterward than before.  There are also
differences in their interpretive schemes.  I'm a tiny bit
surprised to hear Perl's parsing labeled "conventional".
The "strings parsed on the fly ... with ... memoisation ..."
is also an accurate description.  Yes, "everything is a
string" is an important apothegm for Tclers.
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <nkj908uhs3l.fsf@tfeb.org>
······@Starbase.NeoSoft.COM (Cameron Laird) writes:

> Tcl's interpretation shares many aspects with Perl--moreso
> with version 8.0 and afterward than before.  There are also
> differences in their interpretive schemes.  I'm a tiny bit
> surprised to hear Perl's parsing labeled "conventional".


As far as I can tell what perl does is read the source file, parse it
(before executing anything), and then execute the result of the parse.
I think that's reasonably conventional.  It does this horrid magic
with strings & numbers being kind of the same (presumably by some
trick of memoizing the parsed number), but it does not do this for
code fragments (although I think there is an eval function which does
read a string and interpret it as perl, I've never used it, and
certainly it's not fundamental to perl's functioning).

> Yes, "everything is a string" is an important apothegm for Tclers.

It's quite interesting I think where various languages end up.  Tcl
seems to really like this `everything is a string' idea, which I guess
fits quite well with the Unix `all data is streams of bytes' idea.
After messing with some variant of `all data is strings', perl seems
to have arrived at `everything is a hashtable'.  I'm not sure how to
describe where Lisp lives in this parody -- `everything is something,
including program source, but sometimes we restrict things for
efficiency' perhaps?  Java seems to be `most things are something but
some things aren't and program source is not something you should be
thinking about', which is quite close to C++ except there fewer things
are things.  In C nothing is anything (which is actually quite
consistent and nice, though not as consistent as BCPL which doesn't
even allow you to pretend that things are things in source code).

Or something

--tim
From: William Deakin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl    broken?)
Date: 
Message-ID: <3782070B.CEBE8480@pindar.com>
Tim Bradshaw wrote:

> As far as I can tell what perl does is read the source file, parse it
> (before executing anything), and then execute the result of the parse. I
> think that's reasonably conventional.

A small point, if you are interested: before execution the parse-tree
byte-code is optimised. Currently this optimisation is quite basic, but
there is work afoot to improve this. Also, there is a compiler that will
generate byte-code and link into the underlying C code, allowing the
code to be executed immediately, missing out the parse stage.

> It does this horrid magic with strings & numbers being kind of the same
> (presumably by some
> trick of memoizing the parsed number), ....

Hmmm. I would have to disagree with this: strings and numbers are not the
same thing in perl. But perl does a lot of conversion based on the
operation and operands. Maybe too much conversion. If you are interested
in the details of what actually goes on, a good place to start is chapters
19 and 20 of Advanced Perl Programming by Sriram Srinivasan, O'Reilly &
Associates. This also has some nice stuff to say about eval.

Cheers,

Will
From: Marco Antoniotti
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <lwd7y4bymg.fsf@copernico.parades.rm.cnr.it>
Tim Bradshaw <···@tfeb.org> writes:

> ······@Starbase.NeoSoft.COM (Cameron Laird) writes:
> > Yes, "everything is a string" is an important apothegm for Tclers.
> 
> It's quite interesting I think where various languages end up.  Tcl
> seems to really like this `everything is a string' idea, which I guess
> fits quite well with the Unix `all data is streams of bytes' idea.

It's actually a COBOL idea.  One can argue that "everything is a string
in COBOL" as well. :)

> After messing with some variant of `all data is strings', perl seems
> to have arrived at `everything is a hashtable'.  I'm not sure how to
> describe where Lisp lives in this parody -- `everything is something,
> including program source, but sometimes we restrict things for
> efficiency' perhaps?  Java seems to be `most things are something but
> some things aren't and program source is not something you should be
> thinking about', which is quite close to C++ except there less things
> are things.  In C nothing is anything (which is actually quite
> consistent and nice, though not as consistent as BCPL which doesn't
> even allow you to pretent that things are things in source code).
> 
> Or something

To be or not to be?  That is the question.... :)

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Christopher B. Browne
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <slrn7o6ipv.p0g.cbbrowne@knuth.brownes.org>
On 07 Jul 1999 11:25:11 +0200, Marco Antoniotti
<·······@copernico.parades.rm.cnr.it> posted:
>Tim Bradshaw <···@tfeb.org> writes:
>> ······@Starbase.NeoSoft.COM (Cameron Laird) writes:
>> > Yes, "everything is a string" is an important apothegm for Tclers.
>> 
>> It's quite interesting I think where various languages end up.  Tcl
>> seems to really like this `everything is a string' idea, which I guess
>> fits quite well with the Unix `all data is streams of bytes' idea.
>
>It's actually a COBOL idea.  One can argue tha "everything is a string
>in COBOL" as well. :)

The UNIX/Tcl connection is more appropriate, as both have the "everything
is a bag of bytes" nature.  COBOL tends to have the "each string is a
*block* of bytes" nature.

>To be or not to be?  That is the question.... :)

Do be do be do...  - Frank Sinatra
Yabba dabba do...  - Fred Flintstone

-- 
Rules of the Evil Overlord #1:  "My legions of terror will have helmets
with clear plexiglass visors, not face-concealing ones."
········@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Jean-Claude Wippler
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl    broken?)
Date: 
Message-ID: <37820654.25BD9F98@equi4.com>
Cameron Laird wrote:
> 
> In article <···············@lostwithiel.tfeb.org>,
> Tim Bradshaw  <···@tfeb.org> wrote:
[...]
> >Does tcl have a conventional parse-[compile]-execute scheme then, [...]
> 
> Tcl's interpretation shares many aspects with Perl--moreso
> with version 8.0 and afterward than before.  There are also
> differences in their interpretive schemes.  I'm a tiny bit
> surprised to hear Perl's parsing labeled "conventional".
> The "strings parsed on the fly ... with ... memoisation ..."
> is also an accurate description.  Yes, "everything is a
> string" is an important apothegm for Tclers.

The conceptual model which works quite well for me is that Tcl 8.x does
"incremental parsing and bytecode compilation".  There's no way it could
be compiled 100% up front, since it has the same code <-> data duality
as Lisp (only it's all strings, conceptually).  I have no idea how CL
pulls off real compilation, and I find that quite interesting.

Tcl'ers frequently point to the similarity between Tcl and Lisp, and,
the extreme difference in representation notwithstanding, I must say
that I find some fascinating truth in that.

-- Jean-Claude
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl     broken?)
Date: 
Message-ID: <ey3yagtcram.fsf@lostwithiel.tfeb.org>
* Jean-Claude Wippler wrote:

> The conceptual model which works quite well for me, is that Tcl 8.x does
> "incremental parsing and bytecode compilation".  There's no way it could
> be compiled 100% up front, since it has the same code <-> data duality
> as Lisp (only it's all strings, conceptually).  I have no idea how CL
> pulls it off to do real compilation, and find that quite interesting.

Lisp does *not* have a code-is-data duality!  Lisp has a
*source-code*-is-data duality.  This is one of the classic
misunderstandings about Lisp, not helped by a lot of misleading
information out there.  The fact that Lisp source code is available as
non-opaque data enables things like programs that reason about Lisp
code, such as macros; the fact that Lisp functions are *not* available
as non-opaque data enables efficient compilation.
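
To make that concrete (a sketch; exactly what the last form prints is
implementation-dependent):

    ;; source code is ordinary list data you can pick apart...
    (defparameter *src* '(defun add1 (x) (+ x 1)))
    (second *src*)                        ; => ADD1

    ;; ...but a compiled function is an opaque object:
    (compile 'add1 '(lambda (x) (+ x 1)))
    (type-of #'add1)                      ; => COMPILED-FUNCTION, or similar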

--tim
From: Jean-Claude Wippler
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is  tcl     broken?)
Date: 
Message-ID: <378292FB.63F4233A@equi4.com>
Tim,
 
> Lisp does *not* have a code-is-data duality!  Lisp has a
> *source-code*-is-data duality.  This is one of the classic
> misunderstandings about Lisp, not helped by a lot of misleading
> information out there.  The fact that Lisp source code is available as
> non-opaque data enables things like programs that reason about Lisp
> code, such as macros; the fact that Lisp functions are *not* available
> as non-opaque data enables efficient compilation.

I don't understand the difference.  

	- source code compiles to machine code
	- source code can be manipulated as data

Is this correct?

If so, it's the same as for Tcl (replace "machine code" by "bytecode").

As I said, I don't understand how the intermediate compile-before-exec
makes a difference.  Conceptually, source ends up running, doesn't it?

By non-opaque data, you mean "in S-expression form", and by opaque data
you mean "whatever bits come out of the compiler", I assume.

Are we not merely separated by differences in terminology?

-- Jean-Claude
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is   tcl     broken?)
Date: 
Message-ID: <ey3vhbxc5w0.fsf@lostwithiel.tfeb.org>
* Jean-Claude Wippler wrote:

> I don't understand the difference.  

> 	- source code compiles to machine code
> 	- source code can be manipulated as data

> Is this correct?

Yes.  So now *I* don't understand why you think this makes compilation
a problem!  The compiler is simply a program which takes source code
and spits out compiled code.  The only difference from something like
a C compiler is that the data structure of the source is prescribed by
the language itself rather than only by the compiler implementation.
This is an enormous difference in terms of what it lets you do of
course, but it makes basically no difference to a compiler that I can
see.

--tim
From: Jean-Claude Wippler
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is    tcl     broken?)
Date: 
Message-ID: <37831054.EA4AF854@equi4.com>
Tim,
 
> >       - source code compiles to machine code
> >       - source code can be manipulated as data
> 
> > Is this correct?
> 
> Yes.  So now *I* don't understand why you think this makes compilation
> a problem!  The compiler is simply a program which takes source code
> and spits out compiled code.  The only difference from something like
> a C compiler is that the data structure of the source is prescribed by
> the language itself rather than only by the compiler implementation.
> This is an enormous difference in terms of what it lets you do of
> course, but it makes basically no difference to a compiler that I can
> see.

Ok, we're in sync.  Let me try to get back to what I *think* this was
about, with apologies for making a huge thread even longer.

C is frozen in the sense that it needs to be pushed through a compiler
before being of any use.  And even then, it is almost incapable of
manipulating programs written in C, let alone itself.  We agree.

Tcl scripts are sort-of-compiled-behind-the-scenes, but the aspect which
matters here is that they can manipulate strings, and that the scripts
themselves are strings.  So, like Lisp - if I'm correct - they can do
things like macro-expansion, and assembling new control structures from
existing facilities.  I think we agree on this too.

The one misunderstanding I wanted to clear up is that Tcl also compiles
(it has for some years now; it used *not* to).  Invisibly, in a
just-in-time sort of way, and not persistently.  I'm sure we can agree on
this.

As William Tanksley pointed out in a private email, Tcl is not perfect.
It occasionally gets tricked into recompiling, sometimes way too often,
and all outside the control of the programmer, who has no option other
than to be aware of when Tcl bytecode caching can fail (e.g. Tcl's "eval").

By now you probably think I'm a rabid "Tcl defender".  I'm not; in fact
I'm currently trying to sharpen my Forth and Lisp/Scheme skills again.

"Been there, done that" sounds like a good way to end this thread :)

-- Jean-Claude
From: Kent M Pitman
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is   tcl     broken?)
Date: 
Message-ID: <sfwvhbw5trx.fsf@world.std.com>
Tim Bradshaw <···@tfeb.org> writes:

> The only difference from something like
> a C compiler is that the data structure of the source is prescribed by
> the language itself rather than only by the compiler implementation.

And both the compiler and an interpreter are available at runtime.
That's different.  That means you can create data that is source and
promote it to program dynamically at runtime, which is materially
different regardless of the representation issues.  Also, you can load
compiled code directly from the compiler ("compile to core", as they
say) which is different from C as well.  Also, Lisp defines the
meaning of redefinability enough that you can redefine running code in
a predictable way.  That's different from C, too.
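
For instance (a sketch of `promoting data to program' at runtime):

    (let ((source '(lambda (x) (* x 42))))   ; data: just a list
      (funcall (compile nil source) 2))      ; => 84 -- now it's a program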
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is   tcl     broken?)
Date: 
Message-ID: <ey3u2rgckhz.fsf@lostwithiel.tfeb.org>
* Kent M Pitman wrote:
> And both the compiler and an interpreter are available at runtime.
> That's different.  That means you can create data that is source and
> promote it to program dynamically at runtime, which is materially
> different regardless of the representation issues.  Also, you can load
> compiled code directly from the compiler ("compile to core", as they
> say) which is different than C as well.  Also, Lisp defines the
> meaning of redefinability enought hat you can redefine running code in
> a predictable way.  That's different from C, too.

These are of course all true -- what I was trying to say by `there is
no difference' is that `there is no difference which makes compilation
very hard to do', which the grandparent article seemed to be claiming
about Lisp, and which seemed to be similar to the claims that `lisp is
inherently interpreted' that fly around.  I should have worded it more
carefully, I guess.

--tim
From: William Tanksley
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is  tcl     broken?)
Date: 
Message-ID: <slrn7o584t.4vr.wtanksle@dolphin.openprojects.net>
On Wed, 07 Jul 1999 01:36:27 +0200, Jean-Claude Wippler wrote:
>Tim,

>> Lisp does *not* have a code-is-data duality!  Lisp has a
>> *source-code*-is-data duality.  This is one of the classic
>> misunderstandings about Lisp, not helped by a lot of misleading
>> information out there.  The fact that Lisp source code is available as
>> non-opaque data enables things like programs that reason about Lisp
>> code, such as macros; the fact that Lisp functions are *not* available
>> as non-opaque data enables efficient compilation.

>I don't understand the difference.  

>	- source code compiles to machine code
>	- source code can be manipulated as data

>Is this correct?

For the languages being discussed, yes.

>If so, it's the same as for Tcl (replace "machine code" by "bytecode").

No, because in Lisp and Forth the compiled code is a black box.  In Tcl
the compiled code has to be treatable as a string in the right situations.
In some cases the treatment as a string results in enormous code
duplication (remember how loops used to work?).

>-- Jean-Claude

-- 
-William "Billy" Tanksley
From: Paul Duffin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is  tcl     broken?)
Date: 
Message-ID: <37847A6D.718F@mailserver.hursley.ibm.com>
William Tanksley wrote:
> 
> On Wed, 07 Jul 1999 01:36:27 +0200, Jean-Claude Wippler wrote:
> >Tim,
> 
> >> Lisp does *not* have a code-is-data duality!  Lisp has a
> >> *source-code*-is-data duality.  This is one of the classic
> >> misunderstandings about Lisp, not helped by a lot of misleading
> >> information out there.  The fact that Lisp source code is available as
> >> non-opaque data enables things like programs that reason about Lisp
> >> code, such as macros; the fact that Lisp functions are *not* available
> >> as non-opaque data enables efficient compilation.
> 
> >I don't understand the difference.
> 
> >       - source code compiles to machine code
> >       - source code can be manipulated as data
> 
> >Is this correct?
> 
> For the languages being discussed, yes.
> 
> >If so, it's the same as for Tcl (replace "machine code" by "bytecode").
> 
> No, because in Lisp and Forth the compiled code is a black box.  In Tcl
> the compiled code has to be treatable as a string in the right situations.
> In some cases the treatment as a string results in enormous code
> duplication (remember how loops used to work)?
> 

Wrong. The compiled code is never treated as a string. In fact Tcl
will panic if you ask it to generate a string from a byte code object.

The byte code object does, however, remember the string which it was
compiled from.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Axel Schairer
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <fm9oghqqc6e.fsf@clair.dai.ed.ac.uk>
Jean-Claude Wippler <···@equi4.com> writes:

> Tim Bradshaw wrote:
> > 
> [Axel Schairer describes strings passed to Tcl commands]
> > 
> 
> In an attempt to prevent mis-information from spreading too far:
> 
> Tcl caches a parsed representation, it works just like Perl and Python.
> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.

Thanks for the correction.  I didn't want to spread misinformation.

Just to prevent misunderstandings: We were talking about the
evaluation rules, and whether they are more or less uniform.  So it
doesn't matter -- I think -- whether the THEN clause of an if is
represented as a string or as something else (although, on rereading
my post, I have to admit that I said `strings' explicitly, so your
criticism is appropriate).  

I just tried to describe why I thought Paul Duffin's point (that Tcl's
evaluation rules are more uniform) had something to it (every operator
receives its arguments unevaluated); and why this uniformity comes at
the expense of the programmer having to request evaluation manually,
which I don't like.

Cheers, 

Axel
From: Cameron Laird
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <7lsr7p$dhr$1@Starbase.NeoSoft.COM>
In article <···············@clair.dai.ed.ac.uk>,
Axel Schairer  <········@dai.ed.ac.uk> wrote:
>Jean-Claude Wippler <···@equi4.com> writes:
>
>> Tim Bradshaw wrote:
>> > 
>> [Axel Schairer describes strings passed to Tcl commands]
>> > 
>> 
>> In an attempt to prevent mis-information from spreading too far:
>> 
>> Tcl caches a parsed representation, it works just like Perl and Python.
>> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.
>
>Thanks for the correction.  I didn't want to spread misinformation.
>
>Just to prevent misunderstandings: We were talking about the
>evaluation rules, and whether they are more or less uniform.  So it
>doesn't matter -- I think -- whether the THEN clause of an if is
>represented as a string or as something else (although, on rereading
>my post, I have to admit that I said `strings' explicitly, so your
>criticism is appropriate).  
Jean-Claude wasn't criticizing.  He was just
supplying correct(ive) information.
>
>I just tried to describe why I thought Paul Duffin's point (that Tcl's
>evaluation rules are more uniform) had something to it (every operator
>receives its arguments unevaluated); and why this uniformity comes at
>the expense of the programmer having to request evaluation manually,
>which I don't like.
Yes.  Tcl's evaluation rules are more uniform.
Evaluation is always and only explicit--except
of course that Tcl does permit definition of
new commands that behave differently.

It is, of course, your privilege not to like
this.  It's quite appropriate to identify this
uniformity as a crucial characteristic of Tcl.
			.
			.
			.
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: Paul Duffin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <37822782.F36@mailserver.hursley.ibm.com>
Cameron Laird wrote:
> 
> In article <···············@clair.dai.ed.ac.uk>,
> Axel Schairer  <········@dai.ed.ac.uk> wrote:
> >Jean-Claude Wippler <···@equi4.com> writes:
> >
> >> Tim Bradshaw wrote:
> >> >
> >> [Axel Schairer describes strings passed to Tcl commands]
> >> >
> >>
> >> In an attempt to prevent mis-information from spreading too far:
> >>
> >> Tcl caches a parsed representation, it works just like Perl and Python.
> >> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.
> >
> >Thanks for the correction.  I didn't want to spread misinformation.
> >
> >Just to prevent misunderstandings: We were talking about the
> >evaluation rules, and whether they are more or less uniform.  So it
> >doesn't matter -- I think -- whether the THEN clause of an if is
> >represented as a string or as something else (although, on rereading
> >my post, I have to admit that I said `strings' explicitly, so your
> >criticism is appropriate).
> Jean-Claude wasn't criticizing.  He was just
> supplying correct(ive) information.
> >
> >I just tried to describe why I thought Paul Duffin's point (that Tcl's
> >evaluation rules are more uniform) had something to it (every operator
> >receives its arguments unevaluated); and why this uniformity comes at
> >the expense of the programmer having to request evaluation manually,
> >which I don't like.
> Yes.  Tcl's evaluation rules are more uniform.
> Evaluation is always and only explicit--except
> of course that Tcl does permit definition of
> new commands that behave differently.
> 

Just to clarify: Tcl does not allow you to change the evaluation rules
of Tcl scripts. However, you could write a Tcl command which decided to
interpret its arguments as Lisp, for example, but then that would be
using Lisp's evaluation rules and not Tcl's.

> It is, of course, your privilege not to like
> this.  It's quite appropriate to identify this
> uniformity as a crucial characteristic of Tcl.
>                         .

It is probably THE most crucial characteristic of Tcl.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Paul Duffin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <37835681.30E4@mailserver.hursley.ibm.com>
Axel Schairer wrote:
> 
> Jean-Claude Wippler <···@equi4.com> writes:
> 
> > Tim Bradshaw wrote:
> > >
> > [Axel Schairer describes strings passed to Tcl commands]
> > >
> >
> > In an attempt to prevent mis-information from spreading too far:
> >
> > Tcl caches a parsed representation, it works just like Perl and Python.
> > At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.
> 
> Thanks for the correction.  I didn't want to spread misinformation.
> 
> Just to prevent misunderstandings: We were talking about the
> evaluation rules, and whether they are more or less uniform.  So it
> doesn't matter -- I think -- whether the THEN clause of an if is
> represented as a string or as something else (although, on rereading
> my post, I have to admit that I said `strings' explicitly, so your
> criticism is appropriate).
> 
> I just tried to describe why I thought Paul Duffin's point (that Tcl's
> evaluation rules are more uniform) had something to it (every operator
> receives its arguments unevaluated); and why this uniformity comes at
> the expense of the programmer having to request evaluation manually,
> which I don't like.
> 

That is not quite true. In the following, [bar 12] is evaluated and the
result is passed to foo, and the value of $var is taken and passed to foo.

	foo [bar 12] $var

In the following
	if {$var} {
	   puts true
	} else {
	   puts false
	}

The stuff inside {} is not evaluated until [if] decides.

Pseudo lisp code would be.

        (if '(var) '(
            (puts "true")
        ) '(
            (puts "false")
        ))

The (if) function would evaluate its first argument as an expression
and then evaluate the true expression or the false expression depending
on the value of that expression.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Marco Antoniotti
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl   broken?)
Date: 
Message-ID: <lwemikbyuc.fsf@copernico.parades.rm.cnr.it>
Jean-Claude Wippler <···@equi4.com> writes:

> Tim Bradshaw wrote:
> > 
> [Axel Schairer describes strings passed to Tcl commands]
> > 
> > Strings?  Seriously?  So it necessarily parses at runtime?  So this
> > is, like, a huge step backwards from something like perl which does
> > actually build a parse tree and execute that.
> 
> In an attempt to prevent mis-information from spreading too far:
> 
> Tcl caches a parsed representation, it works just like Perl and Python.
> At runtime, yes.  Once.  Just like Perl.  Python saves the parsed form.

Common Lisp implementations compile to native machine code, or interpret
a byte-coded internal representation, or interpret source code (i.e.
s-exprs, one of the basic data structures of the language, along with
arrays, classes, structures, hash-tables, etc.).

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Paul Duffin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <3781DD75.388F@mailserver.hursley.ibm.com>
Tim Bradshaw wrote:
> 
> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> >
> > The point I am making is that NO command in Tcl breaks Tcl's evaluation
> > rules.
> 
> Well, no operator in Lisp breaks Lisp's evaluation rules, it's just
> that they are defined strangely for a small set of operators.  But

So what you are saying is that Lisp's evaluation rules are applied
consistently because the evaluation rules include all of the exceptions.
Just how long are Lisp's evaluation rules? :)

> [snipped stuff about lazy evaluation, lambda and delay which is far
>  too sophisticated for me]

> Finally, you might have the completely horrible solution of giving
> quoted forms to IF and calling EVAL at runtime, and really sacrificing
> any hope of ever having good performance or anything because you are
> now committed to a fundamentally interpreted language and also to
> runtime variable lookup all over the place.
> 

The above is what Tcl 7.6 and before did. Performance is/was a problem
because the code had to be reparsed every time; however, it was not a
"completely horrible solution".

> Is there another way out?
> 

Yes. Tcl 8.0 has exactly the same semantics as Tcl 7.6 but is much 
faster because it now has a byte compiler. The byte code is cached so
code only ever has to be compiled once.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Tim Bradshaw
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <nkjoghqjb2u.fsf@tfeb.org>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> So what you are saying is that Lisp's evaluation rules are applied
> consistently because the evaluation rules include all of the exceptions.
> Just how long are Lisp's evaluation rules ? :)
> 

It depends on the Lisp. I've written interpreters whose basic
evaluator was under a page -- in fact I've written one where the basic
evaluator was only a line or two but that was by slight cheating.
Common Lisp has rather more special operators, but an evaluator still
needn't be that hard.  And CL is a big Lisp.  There is an
evaluator for CL in CL around on the net somewhere.  It's hardly a
challenge to understand the rules, even for CL, and they are fixed.

(note the basic switch is always pretty trivial, it's something like:

        (case (operator-type op)
          ((special)
           ... do the idiosyncratic things for the known special operators)
          ((macro)
           call macro function on form and restart with resulting form)
          (otherwise
           evaluate args, call function on args))

)
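
Fleshed out just enough to actually run it might look something like
this -- a toy, obviously: only QUOTE and IF are treated as special, the
environment is an alist, and the operator is assumed to be a symbol
naming a function or macro in the host Lisp:

    (defun toy-eval (form env)
      (cond ((symbolp form) (cdr (assoc form env)))    ; variable lookup
            ((atom form) form)                         ; self-evaluating
            ((eq (first form) 'quote) (second form))   ; special: QUOTE
            ((eq (first form) 'if)                     ; special: IF
             (if (toy-eval (second form) env)
                 (toy-eval (third form) env)
                 (toy-eval (fourth form) env)))
            ((macro-function (first form))             ; macro: expand, restart
             (toy-eval (macroexpand-1 form) env))
            (t                                         ; evaluate args, apply
             (apply (first form)
                    (mapcar (lambda (a) (toy-eval a env)) (rest form))))))

and that's more or less the whole of the `rules': a short, fixed
dispatch.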

--tim
From: Jeff Dalton
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <x2aetfouz1.fsf@todday.aiai.ed.ac.uk>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> I am finding this discussion extremely enlightening and I do appreciate
> the explanations that I am receiving. Lisp certainly has changed an
> awful lot since I last played with it. I did think that dynamic scoping
> was such a fundemental part of the language that it could never change.

The situation is actually somewhat complex.

First, it's best to think of Lisp as a family of languages, rather than
a single language.  There can be major differences between different
languages in the Lisp family.

Second, lexical scoping has been used in Lisps for a very long time.
For instance, way back in the 60s, the Lisp 1.5 compiler implemented
lexical scoping.  However, these Lisps typically didn't provide
"closures" (functional objects) over the lexical bindings.  

Probably the simplest way to express this issue is to say that they
didn't properly implement nested function definitions, where
"properly" would include the ability to return an "inner" function
that could still refer to local, lexically scoped, variables declared
in the enclosing function(s).

[For historical reasons, functional objects in Lisp are sometimes
called "funargs" (from "functional arguments"), and there was a
distinction between "upward funargs" that are returned "up" to calling
functions and "downward funargs" that are only passed "down" to called
functions and never returned to a point "higher up" than that at which
they were created.  Downward funargs can be implemented using a stack
to hold local variables, upward funargs cannot.  (I mean ordinary
stacks, not "spaghetti" or heap-allocated ones.)]
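
A tiny example of the "upward" case (just a sketch):

    (defun make-adder (n)          ; returns an inner function that still
      (lambda (x) (+ x n)))        ; refers to N after MAKE-ADDER returns

    (funcall (make-adder 3) 4)     ; => 7

Since the returned closure goes on referring to N after MAKE-ADDER has
returned, N cannot simply live in a stack frame -- which is exactly the
case the older, stack-based implementations could not handle.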

So these Lisps had lexically scoped variables, but didn't implement
the fully general case that we've now come to expect and which we
now have in Scheme, Common Lisp, and other recent Lisps.

Moreover, in a number of Lisps, there was both a compiler and an
interpreter, and in some of those Lisps, the interpreter and the
compiler implemented different rules for the treatment of variables.
Typically, in such cases, the interpreter would use dynamic
scope and the compiler would use lexical scope except for
variables declared in some way to be dynamic (and with the
limitation on functional objects mentioned above).

This difference between interpreters and compilers (which obtained
in some, but not all, Lisps) was resolved in Common Lisp in favour
of the "compiler semantics", properly generalised to provide
lexical closures.

Before Common Lisp, Scheme had always been fully lexical
(there's no standard dynamic binding in Scheme, though some
implementations provide ways of doing it).  There was also
an IBM Lisp (I think it was Lisp 370) that had the "compiler
semantics" in both interpreter and compiler but *without*
the generalisation.  (I forget whether it properly handled the
"downward" case or whether it imposed some limitation on what you
were allowed to write.)  By default, InterLisp provided the
"interpreter semantics" in both interpreter and compiler, though
this could be altered via declarations or other instructions
to the system.

There's a lot more that could be said on this subject, but anyway
I think the history makes it possible to see Common Lisp's use of
"full" lexical scoping (with declarations that allow specified
variables to be dynamically scoped) as both a natural development
and a substantial cleanup.

However, the history does raise the question of how people could put
up with such messiness as different variable-binding semantics in the
compiler and interpreter provided by the same implementation of the
same kind of Lisp.

And that in turn raises the question of how people can put up
with language messiness in general.  For instance, I'd have thought
that people would have found tcl's treatment of variables intolerable.
But, evidently, many people do not.

Here are some factors that I think may partly explain this:

  1. Some kinds of messiness are not a problem for humans and may
     actually fit well with how humans actually work.  Think of
     natural languages.  They can have a number of features that
     look like poor design, yet they're clearly things that we
     can handle pretty well.

  2. It helps if programmers understand, or think they understand,
     how the implementation works.  A program is a way to make
     some things happen, and if you can see how to drive the
     implementation to make them happen, it may not matter if
     the mechanisms are a bit messy.

     For instance, tcl's "upvar" looks like a terrible piece of
     language design, and it can lead to horrible "spaghetti"
     interrelationships, but it's clear how it works -- what it does
     in implementational terms -- and that seems to be more important.

     Or think of C and C++, two very successful languages.  C,
     and to a large extent C++, programmers think they know how
     everything in the language works in very machine-like terms,
     and this seems to matter to them: they want to know this and
     are less satisfied by languages that are less "transparent"
     in that respect.

     (And Lisp programmers understood how the interpreters and
     compilers worked, even when they implemented different
     variable-binding semantics, and hence knew how to get things to
     happen.)

  3. People derive a pleasurable sense of accomplishment from the
     mastery of tools and techniques, and may acquire valuable
     expertise -- expertise that leads to professional respect
     and to well-paid and readily available jobs.  But this
     process tends to become unpleasant, and is open to fewer
     people, if it involves too high a density of conceptual
     difficulties.  This suggests that language popularity will
     be increased if that density is kept down by the inclusion
     of technical hurdles that are *not* conceptually difficult.

     The idea is that there should be a substantial body of
     expertise that takes time and some effort to acquire (for
     otherwise there's too little sense of accomplishment
     and too little payoff in terms of increased job prospects,
     etc) but that does not impose too many conceptual barriers.

     Language syntax usually has this property, and that may be
     one reason why Lisp's "less syntax" does not do much to help
     Lisp's popularity.  Up to a point, more syntax is better;
     or at least that seems a plausible rule.

     Moreover, if we look at a number of "successful" languages,
     they typically have a number of features, such as syntactic
     variation and complexity, odd technical features such as
     "upvar", nonorthogonalities that limit how various things
     can be used in combination, and large libraries, that
     provide a substantial body of expertise while decreasing
     the density of conceptual difficulties.  Learning to use the
     language effectively requires a fair amount of time without being
     conceptually difficult too often.  Examples: tcl, perl, Java, C,
     C++.

     These languages may not look like great designs, but that
     may actually be an advantage.

-- jd
From: Fernando Mato Mira
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl  broken?)
Date: 
Message-ID: <377C8C66.4FE0FAE7@iname.com>
I had fun with the SET you mentioned before in IQLisp (1987).
AutoLisp is bad, too (I used it in 1990).
From: Marco Antoniotti
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <lw6743mknm.fsf@copernico.parades.rm.cnr.it>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> I am finding this discussion extremely enlightening and I do appreciate
> the explanations that I am receiving. Lisp certainly has changed an
> awful lot since I last played with it. I did think that dynamic scoping
> was such a fundemental part of the language that it could never change.

Glad to help.

> > I can only quote Dr. Evil, on this: "you just don't get it!" :)
> > 
> 
> I did actually test the above on Emacs and both worked identically,
> at least in terms of the value returned. Elisp-itis strikes again.
> 
> > The forms are not equivalent.  What we can see is just how pernicious
> > Tcl influence can be.
> > 
> 
> Actually the problem is Elisp-itis.

Granted.  I guess the FUNARG problem was pounded into my head so much
that I go on automatic when talking about these issues.

> From the dragon book the above is an example of 
> 	"Lexical Scope with Nested Procedures"
> whereas Tcl has
> 	"Lexical Scope without Nested Procedures".
> 
> The index says for "Static Scope" see "Lexical Scope" so I am taking
> them to be the same.

It goes beyond that.  Lisps have "first class" function objects.
Lexical scoping works in tandem with them.

> Lexical scope is therefore language dependent and different languages
> implement different rules. So Tcl implements Tcl lexical scope which
> is not the same as Lisp lexical scope.

Tcl has basically only 'local' plus 'global' (or namespace-based)
scoping.  This makes it necessary to introduce [upvar], [uplevel] and
friends to do interesting things, albeit in a very convoluted way.

> > (defmacro setq (symbol value) ; Let's forget about redefinitions of SETQ etc.
> >   (list 'set (list 'quote symbol) value))
> > 
> > What is that you do not understand?
> > 
> 
> I could be suffering from elisp-itis here again as I thought that macros had
> funny uses of @ symbols. That looks remarkably like building up a
> Tcl command to be [eval]ed.

The above example could have been rewritten as

  (defmacro setq (symbol value) ; Let's forget about redefinitions of SETQ etc.
     `(set ',symbol ,value))

This is because Common Lisp allows you to define 'macro characters'
which are interpreted by the READER.  In the above, the backquote (or
QUASIQUOTE) instructs the reader to treat the comma and the `,@' in a
special way. I.e. it tells the reader to expand these characters into
special Lisp forms.  Note that the QUOTE special form is treated in
the same way.

	's-expr

is equivalent to

	(QUOTE s-expr)

Backquote is tricky to master but it is just a useful shorthand.
It is true that this is equivalent to building up a Tcl command to be
[eval]ed.  But Lisp makes much better use of these constructs and
does not need any call to EVAL to execute macros.  Moreover, at compile
time, these macros are effectively transformed into inlined code.
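
To make this concrete, here is a small sketch at the prompt (the macro
is renamed MY-SETQ below, since ANSI CL does not let you redefine SETQ
itself; the output shown is what a conforming implementation should give):

  (defmacro my-setq (symbol value)
    `(set ',symbol ,value))

  (macroexpand-1 '(my-setq foo 42))
  ;; => (SET 'FOO 42)            ; i.e. (SET (QUOTE FOO) 42)

  (read-from-string "'foo")
  ;; => (QUOTE FOO), 4           ; ' is pure reader shorthand for QUOTE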

> > The semantics of macros is just that of replacement.  Nothing more.
> > The data structures manipulated within macros are all the standard
> > Lisp ones.
> > 
> 
> True.
> 
> Take the following
> 
> 	(foo fred 1)
> 
> If I implement foo as a function then fred is a variable and foo gets a 'copy' of
> its value.
> 
> If I implement it as a macro then fred can have any meaning.
> 
> The only difference between Lisp and Tcl in this regard is that Lisp
> needs a macro 
> to do its stuff and Tcl needs [upvar] and [eval].

I do not understand the example you give.

> > > Tcl does not need a macro system because it has a simple consistent
> > > syntax / semantics and all commands are treated equally and it is
> > > possible to change the behaviour of existing commands.
> > 
> > What of these properties does not hold in CL without macros.  Which
> > one does not hold if you throw macros in the midst?
> > 
> 
> The fact that not all 'commands' are treated equally. You have
> macros and you have 
> functions and they are treated differently which can be a source of
> some confusion.

I do not understand your insistence on this point. Of course they are
treated differently.  They serve different purposes.  I believe that
we are running around in circles.

> > Which means?  What will the value of
> > 
> >    lambda {x} {lambda {y} {expr $x + $y}}
> > 
> > be? (you can make your substitution of {} with []; it is just another
> > confusing aspect of Tcl.
> > 
> 
> In the implementation I have the above would fail with an error because Tcl's
> scoping rules are different to Lisp.
> However the following would do the same thing.
> 
> 	lambda {x} {curry [lambda {x y} {expr {$x + $y}}] $x}
> 
> > And, once you have that (assuming the semantics were the right one),
> 
> The semantics are right (same value) but the syntax is different.

The semantics is rather different.  I'd like to see the code of
[curry] and of [lambda].  Suppose you do

   set f [lambda {x} {curry [lambda {x y} {expr {$x + $y}}] $x}]

what value would you have in 'f'?

Note that

  (defun curry (f &rest values)
     (lambda (&rest args)
	(apply f (append values args))))

And that your example is already available in CL as

  (lambda (x) (curry (lambda (x y) (+ x y)) x))

Now, if you can do [curry] in any reasonable way, then you should be
able to do the original example (without [curry]) directly.  But,
AFAIK [lambda] is not in Tcl yet.
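
Just to make the behaviour concrete, a small sketch using the CURRY
above (FUNCALL is needed because these are ordinary function objects):

  (funcall (curry #'+ 1 2) 3)
  ;; => 6

  (let ((f (lambda (x) (curry (lambda (x y) (+ x y)) x))))
    (funcall (funcall f 12) 14))
  ;; => 26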

> 
> > what will Tcl have added to Common Lisp of 1984.
> > 
> 
> Less () and more {}[] :)
> 
> Tcl does not add to Common Lisp, Tcl is Tcl.

Exactly the reason why it is better to use something that does the
right thing. :)

> > > I would say that Tcl (without macro system) can do anything that Lisp
> > > (with macro system) can do.
> > 
> > No it can't. With or without the Macro System. The above example I
> > wrote is just a little tiny bit.  As a proof of your argument you
> 
> Countered.

Somewhat.

	...

> > > > > Lisp and Tcl have a lot more in common than Lispers seem to want to
> > > > > acknowledge.
> >
> > Well, the story goes like this.  The author of Tcl writes up a lengthy
> > white paper on "scripting" and interpreted languages with the intent
> > of explaining his choices and the structure of Tcl.  He does not
> > mention the L-word once.  Public uproar from us 20 zealots around the
> > world makes him revise the paper by including the L-word.  End of
> > story.
> > 
> 
> Alright I will rephrase.
> 
> 	Lisp and Tcl have a lot more in common than SOME Lispers AND TCLERS
> 	seem to want to acknowledge.

Given the Fundamental Theorem on the Evolution of Programming
Languages it should not come as a surprise :)

> > Let's not kid ourselves.  The only reason why Tcl got any attention
> > comes from the Tk part.
> 
> Tcl does get a lot of introductions through Tk and Expect just as Lisp
> gets (got) a lot of introductions (albeit badly warped :-) ) from Emacs.
> However Tcl is more than Tk, just as Lisp is more than Emacs.
> 
> Just to reiterate Lisp (except elisp) has come a long way since 
> "dynamic scoping".

Therefore go to www.cons.org and download CLISP (or better, port CMUCL
to AIX and the PowerPC :) )

> This reduces my comments to subjectivity which you should probably
> ignore.

Nahhh.  Flaming is just like wine.  It is good for you if taken in
moderate amounts :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Paul Duffin
Subject: Re: Lisp is ok except for Elisp which is 'broken' (was Re: Why is tcl broken?)
Date: 
Message-ID: <377CD581.19A1@mailserver.hursley.ibm.com>
Marco Antoniotti wrote:
> 
> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> Granted.  I guess the FUNARG problem was pounded into my head so much
> that I go on automatic when talking about these issues.
> 
> > From the dragon book the above is an example of
> >       "Lexical Scope with Nested Procedures"
> > whereas Tcl has
> >       "Lexical Scope without Nested Procedures".
> >
> > The index says for "Static Scope" see "Lexical Scope" so I am taking
> > them to be the same.
> 
> It goes beyond that.  Lisps have "first class" function objects.
> Lexical scoping works in tandem with them.
> 

Agreed.

> > Lexical scope is therefore language dependent and different languages
> > implement different rules. So Tcl implements Tcl lexical scope which
> > is not the same as Lisp lexical scope.
> 
> Tcl has basically only a 'local' plus a 'global' (or namespaces based)
> scoping.  This makes it necessary to introduce [upvar], [uplevel] and
> friends to do interesting things, albeit in a very convoluted way.
> 

True, but it is still statically scoped.

> >
> > I could be suffering from elisp-itis here again as I thought that macros had
> > funny uses of @ symbols. That looks remarkably like building up a
> > Tcl command to be [eval]ed.
> 
> The above example could have been rewritten as
> 
>   (defmacro setq (symbol value) ; Let's forget about redefinitions of SETQ etc.
>      `(set ',symbol ,value))
> 
> This is because Common Lisp allows you to define 'macro characters'
> which are interpreted by the READER.  In the above, the backquote (or
> QUASIQUOTE) instructs the reader to treat the comma and the ,@ in a
> special way. I.e. it tells the reader to expand these characters into
> special Lisp forms.  Note that the QUOTE special form is treated in
> the same way.
> 
>         's-expr
> 
> is equivalent to
> 
>         (QUOTE s-expr)
> 

BLEUUUGHHHH, you have hit the nail on the head.

> Backquote is tricky to master but it is just a useful shorthand.
> It is true that this is equivalent to building up a Tcl command to be
> [eval]ed.  But Lisp makes much better use of these constructs and
> does not need any call to EVAL to execute macros.  Moreover, at compile
> time, these macros are effectively transformed into inlined code.
> 

This can be done in Tcl by overriding proc and defining some substitution
process on the body.

> > > The semantics of macros is just that of replacement.  Nothing more.
> > > The data structures manipulated within macros are all the standard
> > > Lisp ones.
> > >
> >
> > True.
> >
> > Take the following
> >
> >       (foo fred 1)
> >
> > If I implement foo as a function then fred is a variable and foo gets a 'copy' of
> > its value.
> >
> > If I implement it as a macro then fred can have any meaning.
> >
> > The only difference between Lisp and Tcl in this regard is that Lisp
> > needs a macro
> > to do its stuff and Tcl needs [upvar] and [eval].
> 
> I do not understand the example you give.
> 

Ok, forget it, it is not important.

> >
> > The fact that not all 'commands' are treated equally. You have
> > macros and you have
> > functions and they are treated differently which can be a source of
> > some confusion.
> 
> I do not understand your insistence on this point. Of course they are
> treated differently.  They serve different purposes.  I believe that
> we are running around in circles.
> 

Almost certainly :)

> > > Which means?  What will the value of
> > >
> > >    lambda {x} {lambda {y} {expr $x + $y}}
> > >
> > > be? (you can make your substitution of {} with []; it is just another
> > > confusing aspect of Tcl.
> > >
> >
> > In the implementation I have the above would fail with an error because Tcl's
> > scoping rules are different to Lisp.
> > However the following would do the same thing.
> >
> >       lambda {x} {curry [lambda {x y} {expr {$x + $y}}] $x}
> >
> > > And, once you have that (assuming the semantics were the right one),
> >
> > The semantics are right (same value) but the syntax is different.
> 
> The semantics is rather different.  I'd like to see the code of
> [curry] and of [lambda].  Suppose you do
> 
>    set f [lambda {x} {curry [lambda {x y} {expr {$x + $y}}] $x}]
> 
> what value would you have in 'f'?
> 

An opaque reference to the lambda object which you can use as follows.

% set g [$f 12]
% $g 14
26

> Note that
> 
>   (defun curry (f &rest values)
>      (lambda (&rest args)
>         (apply f (append values args))))
> 
> And that your example is already available in CL as
> 
>   (lambda (x) (curry (lambda (x y) (+ x y)) x))
> 
> Now, if you can do [curry] in any reasonable way, then you should be
> able to do the original example (without [curry]) directly.  But,
> AFAIK [lambda] is not in Tcl yet.
> 

At the moment it is in an extension of mine but in the future it will
probably be in Tcl.

> >
> > > > I would say that Tcl (without macro system) can do anything that Lisp
> > > > (with macro system) can do.
> > >
> > > No it can't. With or without the Macro System. The above example I
> > > wrote is just a little tiny bit.  As a proof of your argument you
> >
> > Countered.
> 
> Somewhat.
> 

Especially as I cheated in this case by using a Tcl extension, although
it is possible to do it in pure Tcl.

>         ...
> >       Lisp and Tcl have a lot more in common than SOME Lispers AND TCLERS
> >       seem to want to acknowledge.
> 
> Given the Fundamental Theorem on the Evolution of Programming
> Languages it should not come as a surprise :)
> 

I guess not.

> > > Let's not kid ourselves.  The only reason why Tcl got any attention
> > > comes from the Tk part.
> >
> > Tcl does get a lot of introductions through Tk and Expect just as Lisp
> > gets (got) a lot of introductions (albeit badly warped :-) ) from Emacs.
> > However Tcl is more than Tk, just as Lisp is more than Emacs.
> >
> > Just to reiterate Lisp (except elisp) has come a long way since
> > "dynamic scoping".
> 
> Therefore go to www.cons.org and download CLISP (or better, port CMUCL
> to AIX and the PowerPC :) )
> 
> > This reduces my comments to subjectivity which you should probably
> > ignore.
> 
> Nahhh.  Flaming is just like wine.  It is good for you if taken in
> moderate amounts :)
> 

In this case the flames warmed up a bit of my brain which had frozen
for lack of use !!!!

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <nkjg138jzm1.fsf@tfeb.org>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:


> Mistake, I think I meant dynamic scoping.

(Common) Lisp doesn't have dynamic scoping either, by default (there
are things called `special variables' which are dynamically scoped.
Up to multithreading considerations you can implement them in terms of
the normal lexical variables + globals.)
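
A crude single-variable sketch of that reduction (single-threaded
only; *X* plays the role of the global cell, and the macro does the
save/restore that a special binding would otherwise do for you):

	(defparameter *x* 0)

	(defmacro with-dynamic-x (value &body body)
	  ;; install VALUE in *X* for the duration of BODY, restoring the
	  ;; old value on the way out, even on a non-local exit
	  (let ((old (gensym)))
	    `(let ((,old *x*))
	       (setf *x* ,value)
	       (unwind-protect (progn ,@body)
	         (setf *x* ,old)))))

	(defun peek () *x*)

	(with-dynamic-x 42 (peek))   ; => 42
	(peek)                       ; => 0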

> 
> Look at the following code.
> 
> 	(let ((a 2))
> 	  (let ((b 3))
> 	    (+ a b)))
> 
> When you expand the macro let you get something like.
> 
> 	((lambda (a)
> 	   ((lambda (b) 
> 	      (+ a b)) 3)) 2)
> 
> Which is equivalent (apart from side effects) of.
> 
> 	(defun inner (b)
> 	  (+ a b))
> 
> 	(defun outer (a)
> 	  (inner 3))
> 
> 	(outer 2)
> 

Oh no, it's not equivalent to that at all, regardless of side effects,
names &c.  Assuming there is no special declaration in effect for A,
this code will cause a runtime error, and (probably) a compiletime
warning.

> So what is the process which associates the use of "a" inside inner
> with the formal argument of outer. I 
> 

There is no process, but the two code fragments you give are not
equivalent.

> Lisp macros manipulate S-expressions but the macro definitions themselves
> are interpreted differently to Lisp expressions.
> 
> A simple macro version of setq would convert from
> 	(setq symbol '(list))
> to
> 	(set 'symbol '(list))
> 

That is certainly not a legal transformation.  In fact there is no
legal transformation for SETQ; it is necessarily defined as a
primitive in the language.

> A macro setq is not interpreted the same way as set is because if it
> was an error would occur when the Lisp interpreter tried to get the
> value of the variable symbol before symbol was created.

I think you're seriously confused here.  Lisp macros perform
transformations of source code represented as Lisp data.  That is
*all* they do -- they have no special magic access to the state of the
system which would *allow* them to do any inconsistent thing.  Any
`inconsistencies' they introduce can only correspond to inconsistent
definition of the macro.

In the particular case above you seem to be confused about SETQ (which
is a special operator, and thus can *not* be defined by a macro in the
language), and SET which is a (deprecated) function.  These do quite
different things.
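
A tiny illustration of the difference (a sketch; X is just a scratch
name here):

	(let ((x 1))
	  (setq x 2)   ; SETQ assigns the lexical variable X
	  x)           ; => 2

	(let ((x 1))
	  (set 'x 3)   ; SET assigns (SYMBOL-VALUE 'X) -- a different cell
	  x)           ; => 1, the lexical X is untouched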

Special operators certainly do behave in idiosyncratic ways, that's
why they are called `special'.  Perhaps what you are claiming is that
TCL has no special operators?
 
Without meaning to be rude, it looks to me like your knowledge of Lisp
is rather old.  Some of the things you claim would have been
more-or-less true for very old dialects of Lisp (and some are probably
kind of true for one very old dialect still in use -- elisp). It would
perhaps be a good idea to get a recent book on Lisp or scheme to see
what things are like nowadays.

--tim
From: Jeff Dalton
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <x2btdvp2au.fsf@todday.aiai.ed.ac.uk>
Tim Bradshaw <···@tfeb.org> writes:

> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> > A macro setq is not interpreted the same way as set is because if it
> > was an error would occur when the Lisp interpreter tried to get the
> > value of the variable symbol before symbol was created.

> I think you're seriously confused here.  Lisp macros perform
> transformations of source code represented as Lisp data.  That is
> *all* they do -- they have no special magic access to the state of the
> system which would *allow* them to do any inconsistent thing.  [...]

There's also the confusion illustrated by "before the symbol
was created".
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87aetgfqww.fsf@orion.dent.isdn.cs.tu-berlin.de>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> If they are statically scoped how does the following work.
> 
> Look at the following code.
> 
> 	(let ((a 2))
> 	  (let ((b 3))
> 	    (+ a b)))
> 
> When you expand the macro let you get something like.
> 
> 	((lambda (a)
> 	   ((lambda (b) 
> 	      (+ a b)) 3)) 2)
> 
> Which is equivalent (apart from side effects) of.
> 
> 	(defun inner (b)
> 	  (+ a b))
> 
> 	(defun outer (a)
> 	  (inner 3))
> 
> 	(outer 2)

They are _NOT_ equivalent, because the reference to a in inner is now
completely free, which it was not in the lambda formulation.  When you
change the nesting of functions/forms like this, you of course change
their meaning (this is basic Lambda-calculus).

> So what is the process which associates the use of "a" inside inner
> with the formal argument of outer. I 

In lexical scoping there is no such mechanism, since a variable has to 
be lexically apparent for it to be accessible.  Since a is not
lexically apparent in inner, it is not accessible.  That is one of the 
important features of lexical scoping, since it ensures that I only
have to look at the lexically enclosing environment to understand
which variables are accessed.

Common Lisp allows dynamic scoping for certain, specially declared,
variables.  In CL you _could_ write the following as a somewhat
equivalent form of your TCL code below, but no Lisp programmer would
do it that way:

(defun inner (b)
  ;; Note that a is a free reference in here, so we rely on there
  ;; being a dynamic binding in effect for a when we are called.
  ;; This is clearly insane ;)
  (+ a b))

(defun outer (a)
  (declare (special a))
  (inner 3))

(outer 2)

=> 5

If you really wanted to do something this way in CL, then a would be
considered a global parameter to the behaviour of inner, and would be
declared prominently as such.  If that isn't the case (as in your
example, where arguably a is just a parameter of inner that you don't
care to pass explicitly), you would simply use closures/nested
functions as appropriate, or indeed pass the argument around, if you
want inner to be self-sufficient.  I.e. either do

a)

(defun outer (a)
  (flet ((inner (b) (+ a b)))
    (inner 3)))

b) 

(defun inner (a b)
  (+ a b))

(defun outer (a)
  (inner a 3))

or 

c)

(defvar *inner-frobnication-parameter* 0
  "This global parameter allows you to fine tune the frobnicator of
inner in a global fashion.  Please use with appropriate care, since
too high value here might result in the sudden destruction of the
whole of creation, which some would consider to be enormous lossage.")

(defun inner (b)
  "Frobnicates b via a general purpose frobnication-algorithm.  See
also *inner-frobnication-parameter*, which gives the total bias for
the algorithm.  See Knuth, Vol 1, P. 169 for a specification of the
algorithm and the parameters involved."
  (+ *inner-frobnication-parameter* b))

(defun outer (a)
  "Frobnicates 3 via inner, with a as the setting for the
*inner-frobnication-parameter*."
  (let ((*inner-frobnication-parameter* a))
    (inner 3)))

> The Tcl equivalent is
> 
> 	proc inner {b} {
> 	    upvar 1 a a;
> 	    expr {$a + $b}
> 	}
> 
> 	proc outer {a} {
> 	    inner 3
> 	}
> 
> 	outer 2

Even though I'd shoot every programmer that wrote the Lisp code I
first wrote above, it still seems to me to be a much better construct
than the corresponding upvar in Tcl:

a)  With Tcl it seems to me that the caller has no way of knowing
    which of the procedures he calls might simply start poking at his
    variables.  Worse:  Since upvar can be done for any n levels, he
    will have to inspect _all_ of the procedures ever called while he
    is active, to ensure that they are not using any of his internal
    variables.

    This to me seems clearly insane:  If I were maintaining the Tcl
    code above, and at some point decided that a was a misnomer and
    should be called first-of-two (or whatever else), I'd have to look 
    at all procedures ever called to ensure none was relying on my
    having a variable called a.  In Lisp the special declaration would 
    at least warn me about the fact that someone somewhere was relying 
    on me dynamically binding a.

b) The fact that upvar has to specify at which level the
   corresponding variable is to be found makes its hackish nature
   apparent.  Inner shouldn't have to know who exactly will provide it
   with the value for a.  In Tcl I can't call inner via a mediating
   function, like this:

   proc master {a} {
     middle 3
   }

   proc middle {c} {
     inner $c
   }

   For this to work, I'd either have to modify inner (in which case
   the old call-chain via outer would cease to work), or I'd have to
   modify middle, to ensure middle passes along a.  Now imagine master 
   either calling inner directly or via middle (depending on some
   condition), and you get into serious trouble understanding and
   maintaining the various relationships and variable bindings
   involved.  This seems to me to take the problems of dynamic binding 
   to new levels (i.e. it somehow reminds me of assembler code poking 
   around in a caller's stack frame, to avoid copying values into new
   stack frames).

> As I have said before it is simply an explicit form of the Lisp
> mechanism which allows the let macro to be implemented using
> lambda. It is very rare that N is more than 1.

To repeat it: dynamic scoping is in no way necessary or even helpful
for implementing let via lambda.  LET is directly expressible via
LAMBDA in the lambda-calculus, which is lexically scoped.  The same
goes for Scheme, which doesn't have dynamic scoping either.  What you
are doing is performing invalid transformations on lambda-calculus
expressions, going on to claim that since you need dynamic scoping to
make those transformations work, that dynamic scoping is needed to
allow LET to be expressed in LAMBDA.  Had you any knowledge of Lambda
calculus, you wouldn't make these ridiculous claims.  BTW: The invalid
transformations you perform are common-place in Tcl, because Tcl (like
C) doesn't allow nested functions.

> > You are very mistaken.  Common Lisp macros are definitively not
> > another language.  They manipulate S-expressions, which are what Lisp
> 
> Lisp macros manipulate S-expressions but the macro definitions themselves
> are interpreted differently to Lisp expressions.

The macro-definitions are interpreted exactly in the same way as any
other Lisp expression, i.e. in

(defmacro blabla (a b)
  (my-expression))

(my-expression) is interpreted in exactly the same way as
(my-expression) would in any other context (with appropriate bindings
in place of course).

> A simple macro version of setq would convert from
> 	(setq symbol '(list))
> to
> 	(set 'symbol '(list))

It wouldn't do this, since set can only access "global variables",
whereas setq can access local variables.  Take note please that set
is deprecated, and that setq is not a macro, but a special form.  If
this doesn't mean anything to you, then please refer to the ANSI
Common Lisp standard (also available in a hypertext version, at
http://www.harlequin.com/books/HyperSpec), which would benefit this
discussion greatly.

> A macro setq is not interpreted the same way as set is because if it
> was an error would occur when the Lisp interpreter tried to get the
> value of the variable symbol before symbol was created.

This doesn't make any sense.  Please get your vocabulary right.  The
symbols in question are created at read-time, so there is no way that
either of the above forms could be executed in an environment where
the symbol in question didn't exist, unless you did some very evil
hacking with the package system, the consequences of which I wouldn't
claim to be well-defined.

Now that I've re-read your sentence for the xth time, it seems to me
that you are claiming that (setq symbol 5) is interpreted differently
than (set symbol 5), which is of course right.  That was the whole
point of defining a macro for setq, that transformed (setq symbol 5)
into (set 'symbol 5) and not into (set symbol 5).

But this is not inconsistent in my book, because that is exactly what
you asked for: You _wanted_ every (setq a b) form to be interpreted
as if (set 'a b) had been written.  So after macro-expansion, the
evaluation of (set 'a b) is totally consistent with the evaluation of
any other set form.

If you find this inconsistency troublesome, then you have to condemn
every kind of macro system, and Tcl, too, which allows the user to
change the behaviour of commands.

> I don't underestimate the power of Lisp macros; I am just saying that I
> find the way they warp the otherwise simple Lisp syntax / semantics [troubling].
> 
> Lisp does not really need macros, they are really just a way of
> creating short cuts.

Firstly this sounds like the old argument against HLLs, i.e. we don't
need high-level languages, they are really just a way of creating
short cuts, which though true, totally misses the point of HLLs.

Secondly, since special forms (of which setq as well as quote and
lambda are prime examples) also warp the otherwise simple evaluation
rules of Lisp, it can be argued that Lisp effectively needs at least a
certain amount of warping, to be usable at all.  Even the pure lambda
calculus is warped in your eyes then.  Funny how all the warpiness of
Tcl doesn't grate on your nerves...

> Tcl does not need a macro system because it has a simple consistent
> syntax / semantics and all commands are treated equally and it is
> possible to change the behaviour of existing commands.

This is clearly no argument at all, because all the purported features
of Tcl you mention are clearly orthogonal to the existence of a macro
system.

> I would say that Tcl (without macro system) can do anything that Lisp
> (with macro system) can do.

Assembler can do anything either Lisp or Tcl can do.  This is no
argument against either Lisp or Tcl.  At the point where programming
language discussions turn to this Turing-machine-equivalence argument, 
it is clear that the discussion is fruitless.

end-of-line.

Regs, Pierre.

PS:  Should any further discussion seem sensical (it doesn't to me),
please heed the F'up to c.l.l, since neither Python nor Scheme are at
all relevant to this.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Andy Freeman
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7lg4dd$hs5$1@nnrp1.deja.com>
In article <·············@mailserver.hursley.ibm.com>,
  Paul Duffin <·······@mailserver.hursley.ibm.com> wrote:

> > Dynamic binding is another beast altogether, which Common Lisp and
> > Scheme (statically scoped languages) allow in through a back door.
>
> If they are statically scoped how does the following work.

All of the following book references are to Steele's Common
Lisp the Language, 2nd edition.

See Chapter 3 (esp pg 45, which explicitly states that the
default scope is static), Chapter 1.2.3 (discussion of expansion),
and Chapter 8 (esp 8.1).  Chapter 3 has a nice description
of what scope, extent, static, and dynamic mean.

> Look at the following code.
>
> 	(let ((a 2))
> 	  (let ((b 3))
> 	    (+ a b)))
>
> When you expand the macro let you get something like.
>
> 	((lambda (a)
> 	   ((lambda (b)
> 	      (+ a b)) 3)) 2)

So far, so good.  However, it doesn't really go further
because lambda and application are primitives.

> Which is equivalent (apart from side effects) of.
>
> 	(defun inner (b)
> 	  (+ a b))
>
> 	(defun outer (a)
> 	  (inner 3))
>
> 	(outer 2)

No, it isn't.  That's not what lambda means, and it isn't how
macros work, and so on.  Lambda "makes" an UNNAMED function IN
a given lexical context - the above is an invalid transformation.

One can use lambda to create CL variables with dynamic scope (that
is indefinite scope and dynamic extent), but nested lambdas don't
work as above.

It isn't clear that nested lambdas EVER worked that way, even in
dynamically scoped lisps.

> So what is the process which associates the use of "a" inside inner
> with the formal argument of outer.

mu.  Since there is no "inner" or "outer", the question makes
no sense.

> Lisp macros manipulate S-expressions but the macro definitions themselves
> are interpreted differently to Lisp expressions.

Macro definitions just define source to source transformations
that happen "before" evaluation.

> A simple macro version of setq would convert from
> 	(setq symbol '(list))
> to
> 	(set 'symbol '(list))

Setq can not be defined in terms of set because set can't work for
statically scoped vars, which is the default in CL.

> Lisp does not really need macros, they are really just a way of
> creating short cuts.

I don't need * or even + either (if I have +1 and -1), but .....
(In fact, I can even get by with an "if" which always evaluates
all of its arguments.)

> I would say that Tcl (without macro system) can do anything that Lisp
> (with macro system) can do.

Since both are Turing machines....

-andy


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
From: Cameron Laird
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7liguf$58o$1@Starbase.NeoSoft.COM>
In article <·············@mailserver.hursley.ibm.com>,
Paul Duffin  <·······@mailserver.hursley.ibm.com> wrote:
>Marco Antoniotti wrote:
			.
		[lots of interesting stuff]
			.
			.
>Tcl does not need a macro system because it has a simple consistent
>syntax / semantics and all commands are treated equally and it is
>possible to change the behaviour of existing commands.
>
>> >
>> > If I needed a macro language in Tcl I can just write one.
>> 
>> I just recieved an email from Cameron Laird, citing a 'procm' form in
>> Tcl, which is supposed to do 'scan time' macros - probably something
>> in line with the real thing. However, the manual pages for 8.1 at Scriptics
>> does not mention it.  Yet, I suppose that it is still an experimental
>> feature that maybe will appear in Tcl in a later edition - a few
>> lustres after Lisp had them :)
>> 
>
>I would say that that is probably something Cameron has created. Despite
>what you think, Tcl is not crying out for a Lisp-like macro system. What
>it is crying out for is more data types and it will soon be getting them.
>Including a 'proper' [lambda] implementation.
>
>I would say that Tcl (without macro system) can do anything that Lisp
>(with macro system) can do.
			.
			.
			.
I've tweaked follow-ups to move this out of
the way of Pythoneers.

Paul's right.  I apologize for not writing
more clearly before.  I'll summarize:  Marco
wrote, "Here's the kind of thing that Tcl's
[proc] doesn't do as well as LISP's macros
...", I replied, "No problem--we'll just de-
fine [procm] ..." and left the details as an
exercise to the reader.  No, there is NOT a
standard [procm], and I hadn't written one
explicitly at the time I followed up.

That in itself is interesting.  The point I
was trying to make is that Tclers find it
very natural to reason, "Hmm, I don't have
the control structure I want, so I'll quickly
create it."  This is different from, for ex-
ample, Python.  I'll characterize Python's
attitude as more like, "Hmm, Guido doesn't
provide the control structure I want; I must
be using the wrong object pattern for my
algorithm, so it's time to redesign."

I'll summarize:  the real content of my mes-
sage was that Tclers are generally rather
indifferent to reservations about syntax,
because any disagreement quickly is absorbed
into the, "well, if that's what you want,
let's just write a proc that interprets it
that way" state.  LISPish macro processing
isn't present in Tcl, but anyone who wants
it can have it easily enough, so the Tcl
mentality simply doesn't worry much about it.
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: Donal K. Fellows
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <m0JTEFA0DTf3EwLV@ugglan.demon.co.uk>
In article <············@Starbase.NeoSoft.COM>, Cameron Laird
<······@Starbase.NeoSoft.COM> writes
>I'll summarize:  the real content of my mes-
>sage was that Tclers are generally rather
>indifferent to reservations about syntax,
>because any disagreement quickly is absorbed
>into the, "well, if that's what you want,
>let's just write a proc that interprets it
>that way" state.  LISPish macro processing
>isn't present in Tcl, but anyone who wants
>it can have it easily enough, so the Tcl
>mentality simply doesn't worry much about it.

I think that is probably some of the wisest stuff I've read on c.l.t for
a long time, and there have been a lot of good posts recently.  Tclers
(especially the experienced ones that frequent the group) tend to
recognise the difference between the substance of semantics and the
clothing of syntax - I suspect having a language with extremely little
in the way of inherent syntax itself helps a lot in reaching this
position...

And the lack of at least one layer of braces makes our scripts more
readable than Lisp too!  :^P

Donal.
-- 
Donal K. Fellows (at home)
--
FOOLED you!  Absorb EGO SHATTERING impulse rays, polyester poltroon!!
From: William Tanksley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <slrn7nss0n.5u0.wtanksle@dolphin.openprojects.net>
On Fri, 2 Jul 1999 22:36:52 +0100, Donal K. Fellows wrote:

>I think that is probably some of the wisest stuff I've read on c.l.t for
>a long time, and there have been a lot of good posts recently.  Tclers
>(especially the experienced ones that frequent the group) tend to
>recognise the difference between the substance of semantics and the
>clothing of syntax -

Tcl is a nice one for this, although I don't care for the inflexibility of
its syntax.

>I suspect having a language with extremely little
>in the way of inherent syntax itself helps a lot in reaching this
>position...

Yes, this is true.  There are, however, languages with far less than Tcl.

>And the lack of at least one layer of braces makes our scripts more
>readable than Lisp too!  :^P

This is often cited as a problem with Lisp.  As the Lisp people have often
pointed out to me, it's not a bug; it's the inevitable result of a
feature.  The feature is that everything is an expression.

Reminds me of the people who keep assuming that Python's indentation is
some kind of bug :-).

>Donal.

-- 
-William "Billy" Tanksley
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <378077E1.5E65@mailserver.hursley.ibm.com>
William Tanksley wrote:
> 
> On Fri, 2 Jul 1999 22:36:52 +0100, Donal K. Fellows wrote:
> 
> >I think that is probably some of the wisest stuff I've read on c.l.t for
> >a long time, and there have been a lot of good posts recently.  Tclers
> >(especially the experienced ones that frequent the group) tend to
> >recognise the difference between the substance of semantics and the
> >clothing of syntax -
> 
> Tcl is a nice one for this, although I don't care for the inflexibility of
> its syntax.
> 

In what respect ?

> >I suspect having a language with extremely little
> >in the way of inherent syntax itself helps a lot in reaching this
> >position...
> 
> Yes, this is true.  There are, however, languages with far less than Tcl.
> 
> >And the lack of at least one layer of braces makes our scripts more
> >readable than Lisp too!  :^P
> 
> This is often cited as a problem with Lisp.  As the Lisp people have often
> pointed out to me, it's not a bug; it's the inevitable result of a
> feature.  The feature is that everything is an expression.
> 

Tcl quotes its outermost statements with {}, newlines and semicolons;
Lisp quotes them with ().

> Reminds me of the people who keep assuming that Python's indentation is
> some kind of bug :-).
> 

Or that Tcl's comment is a bug.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: William Tanksley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <slrn7o58l7.4vr.wtanksle@dolphin.openprojects.net>
On Mon, 05 Jul 1999 10:16:17 +0100, Paul Duffin wrote:
>William Tanksley wrote:
>> On Fri, 2 Jul 1999 22:36:52 +0100, Donal K. Fellows wrote:

>> >I think that is probably some of the wisest stuff I've read on c.l.t for
>> >a long time, and there have been a lot of good posts recently.  Tclers
>> >(especially the experienced ones that frequent the group) tend to
>> >recognise the difference between the substance of semantics and the
>> >clothing of syntax -

>> Tcl is a nice one for this, although I don't care for the inflexibility of
>> its syntax.

>In what respect ?

You can add new words, but they have to sit within Tcl's syntax.  The
arguments they take have to fit.  Even the arguments of their arguments
have to fit.  And so on.  It makes even simple, obvious things like
comments odd.

Lisp and Scheme have a much simpler syntax, so redefining syntax is easy
if it's ever wanted.  Forth has no syntax, so it's trivial.

>> >And the lack of at least one layer of braces makes our scripts more
>> >readable than Lisp too!  :^P

>> This is often cited as a problem with Lisp.  As the Lisp people have often
>> pointed out to me, it's not a bug; it's the inevitable result of a
>> feature.  The feature is that everything is an expression.

>Tcl quotes its outermost statements with {} newlines and semicolons, 
>Lisp quotes them with ().

Lisp doesn't have the concept of a statement.  Everything's in parentheses
because everything's an expression.  No eval.

>> Reminds me of the people who keep assuming that Python's indentation is
>> some kind of bug :-).

>Or that Tcl's comment is a bug.

Exactly.  It's the result of a deliberate design decision, with benefits
and drawbacks of its own.  We want to study the design decision, not a
single side effect.

Only accusing Python indentation as being a bug is even sillier than
accusing Tcl comments -- the comments are a side effect, and could be
hacked away at some expense.  The indentation is a fundamental part of the
design of the language, and it's why I like Python so much, in spite of
all the languages (such as Lisp) with much better OO, scoping, and so on.

>Paul Duffin

-- 
-William "Billy" Tanksley
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3783305C.33FA@mailserver.hursley.ibm.com>
William Tanksley wrote:
> 
> On Mon, 05 Jul 1999 10:16:17 +0100, Paul Duffin wrote:
> >William Tanksley wrote:
> >> On Fri, 2 Jul 1999 22:36:52 +0100, Donal K. Fellows wrote:
> 
> >> >I think that is probably some of the wisest stuff I've read on c.l.t for
> >> >a long time, and there have been a lot of good posts recently.  Tclers
> >> >(especially the experienced ones that frequent the group) tend to
> >> >recognise the difference between the substance of semantics and the
> >> >clothing of syntax -
> 
> >> Tcl is a nice one for this, although I don't care for the inflexibility of
> >> its syntax.
> 
> >In what respect ?
> 
> You can add new words, but they have to sit within Tcl's syntax.  The
> arguments they take have to fit.  Even the arguments of their arguments
> have to fit.  And so on.  It makes even simple, obvious things like
> comments odd.
> 
> Lisp and Scheme have a much simpler syntax, so redefining syntax is easy
> if it's ever wanted.  Forth has no syntax, so it's trivial.
> 

So just how do you redefine Lisp and Scheme syntax ?
Can you make Lisp look like Tcl ?

> >> Reminds me of the people who keep assuming that Python's indentation is
> >> some kind of bug :-).
> 
> >Or that Tcl's comment is a bug.
> 
> Exactly.  It's the result of a deliberate design decision, with benefits
> and drawbacks of its own.  We want to study the design decision, not a
> single side effect.
> 
> Only accusing Python indentation as being a bug is even sillier than
> accusing Tcl comments -- the comments are a side effect, and could be
> hacked away at some expense.  The indentation is a fundamental part of the
> design of the language, and it's why I like Python so much, in spite of
> all the languages (such as Lisp) with much better OO, scoping, and so on.
> 

So what you are saying is the whitespace in Python was an explicit design
decision whereas the comment behaviour in Tcl is a side effect.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <nkjvhbwitpi.fsf@tfeb.org>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> So just how do you redefine Lisp and Scheme syntax ?
> Can you make Lisp look like Tcl ?
> 

By `redefining the syntax', Lisp people typically mean making another
parenthesized syntax, because they don't care much about parens.
That's obviously very easy to do. However you can cook other syntaxes
in CL easily by changing the readtable.

As a small example I played with a toy `message passing' syntax for CL
which changes

	(car (cons 1 2))

to

	[[1 cons 2] car]

and similarly

	(+ (sin x) 2.0)

is
	[[x sin] + 2.0]

This syntax was a few lines of code to implement.
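
Roughly this sort of thing (a sketch of the idea, not the exact code):

	;; make [ read a list up to the matching ] and rewrite
	;; (object message arg...) as (message object arg...)
	(defun bracket-reader (stream char)
	  (declare (ignore char))
	  (let ((form (read-delimited-list #\] stream t)))
	    (list* (second form) (first form) (cddr form))))

	(set-macro-character #\[ #'bracket-reader)
	(set-macro-character #\] (get-macro-character #\)))

	;; after which  [[1 cons 2] car]  reads as  (CAR (CONS 1 2))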

You can go much further along these lines by essentially cooking your
own parser for little languages and then embedding them within Lisp
syntax (or using them as an alternative syntax) -- these can be harder
than the little readtable changes like the above as you have to deal
with stuff like operator precedence for non-prefix languages.  There
are many examples of these things, often used to provide infix
arithmetic expressions in Lisp code:

	(if $sin(x) > 0.0$ ...)

None of these have really caught on though.

--tim
From: Hume Smith
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <931369133.199720930@news.glinx.com>
Paul Duffin <·······@mailserver.hursley.ibm.com> wrote:

>So just how do you redefine Lisp and Scheme syntax ?
>Can you make Lisp look like Tcl ?

it may vary a bit from Lisp to Lisp, but usually the reader is itself another
lisp function, and it either has hooks for hanging meanings off individual
characters or patterns or can be completely replaced.

i suppose one -could- replace TCL's eval, if one really wanted to... i doubt
the change propagates into the C code.  (ie you'd have to reimplement if, switch,
for, foreach, while, proc, etc...)

--
<URL:http://www.glinx.com/~hclsmith/>
From: William Tanksley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <slrn7o7d7o.6la.wtanksle@dolphin.openprojects.net>
On Wed, 07 Jul 1999 11:47:56 +0100, Paul Duffin wrote:
>William Tanksley wrote:

>> On Mon, 05 Jul 1999 10:16:17 +0100, Paul Duffin wrote:
>> >William Tanksley wrote:

>> Lisp and Scheme have a much simpler syntax, so redefining syntax is easy
>> if it's ever wanted.  Forth has no syntax, so it's trivial.

>So just how do you redefine Lisp and Scheme syntax ?

You replace the reader.

>Can you make Lisp look like Tcl ?

Yes.  Or C, or Forth.  But you wouldn't want to -- far too much effort.

OTOH, I do know of a FORTRAN expression parser in Forth.  Very useful; it
simply outputs the Forth equivalent of a FORTRAN formula.  When you're
doing scientific calculations you want to keep things as close as possible
to the original document.

A similar effort in Tcl would be possible, but the user would have to
carefully avoid Tcl's special characters, and Tcl would not be able to
bytecode the result without changes to the Tcl compiler itself.  (Unless,
as is entirely possible, the Tcl optimizer has grown more sophisticated.)

>> >> Reminds me of the people who keep assuming that Python's indentation is
>> >> some kind of bug :-).

>> >Or that Tcl's comment is a bug.

>> Exactly.  It's the result of a deliberate design decision, with benefits
>> and drawbacks of its own.  We want to study the design decision, not a
>> single side effect.

>> Only accusing Python indentation as being a bug is even sillier than
>> accusing Tcl comments -- the comments are a side effect, and could be
>> hacked away at some expense.  The indentation is a fundamental part of the
>> design of the language, and it's why I like Python so much, in spite of
>> all the languages (such as Lisp) with much better OO, scoping, and so on.

>So what you are saying is the whitespace in Python was an explicit design
>decision whereas the comment behaviour in Tcl is a side effect.

Sort of.  Both are on the same scale: they're useful and deliberate design
decisions.  Whitespace in Python is more deliberate than comments in Tcl:
indentation is a fundamental part of Python's definition.  Comments which
don't always comment are not fundamental to Tcl, although the _reasons_
they do not comment are fundamental.

>Paul Duffin

-- 
-William "Billy" Tanksley
From: Donal K. Fellows
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7m1qq8$m70$1@m1.cs.man.ac.uk>
In article <·······················@dolphin.openprojects.net>,
William Tanksley <········@dolphin.openprojects.net> wrote:
> OTOH, I do know of a FORTRAN expression parser in Forth.  Very useful; it
> simply outputs the Forth equivalent of a FORTRAN formula.  When you're
> doing scientific calculations you want to keep things as close as possible
> to the original document.
> 
> A similar effort in Tcl would be possible, but the user would have to
> carefully avoid Tcl's special characters, and Tcl would not be able to
> bytecode the result without changes to the Tcl compiler itself.  (Unless,
> as is entirely possible, the Tcl optimizer has grown more sophisticated.)

Oh, you could bytecode the result.  Easily (the hard bit would be
*not* bytecoding the result...)  The charm is in cacheing the result
of the bytecoding for later reuse.  And that is an exercise for the
reader.  :^)

Donal.
-- 
Donal K. Fellows    http://www.cs.man.ac.uk/~fellowsd/    ········@cs.man.ac.uk
-- The small advantage of not having California being part of my country would
   be overweighed by having California as a heavily-armed rabid weasel on our
   borders.  -- David Parsons  <o r c @ p e l l . p o r t l a n d . o r . u s>
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3785B997.2001@mailserver.hursley.ibm.com>
William Tanksley wrote:
> 
> On Wed, 07 Jul 1999 11:47:56 +0100, Paul Duffin wrote:
> >William Tanksley wrote:
> 
> >> On Mon, 05 Jul 1999 10:16:17 +0100, Paul Duffin wrote:
> >> >William Tanksley wrote:
> 
> >> Lisp and Scheme have a much simpler syntax, so redefining syntax is easy
> >> if it's ever wanted.  Forth has no syntax, so it's trivial.
> 
> >So just how do you redefine Lisp and Scheme syntax ?
> 
> You replace the reader.
> 

I do not know what the reader is but it sounds as though you have to
create your own parser which converts from whatever language to Lisp.

> >Can you make Lisp look like Tcl ?
> 
> Yes.  Or C, or Forth.  But you wouldn't want to -- far too much effort.
> 
> OTOH, I do know of a FORTRAN expression parser in Forth.  Very useful; it
> simply outputs the Forth equivalent of a FORTRAN formula.  When you're
> doing scientific calculations you want to keep things as close as possible
> to the original document.
> 
> A similar effort in Tcl would be possible, but the user would have to
> carefully avoid Tcl's special characters, and Tcl would not be able to
> bytecode the result without changes to the Tcl compiler itself.  (Unless,
> as is entirely possible, the Tcl optimizer has grown more sophisticated.)
> 

Assume I have a Tcl command "fortran-expr" which takes a fortran expr
and converts it into Tcl code.

	set code [fortran-expr {some fortran expression}]
	eval $code

Then of course the fortran expression would have to obey Tcl's rules
about what can be inside {}.

So how is Lisp or Forth (of which I only remember postfix notation)
different to this ?

	(setq code (fortran-expr "some fortran expression"))
	(eval code)

The fortran expression has to obey Lisp's rules about what can be in
a string which are much more stringent than Tcl's {} as you cannot
even nest strings.

> >> >> Reminds me of the people who keep assuming that Python's indentation is
> >> >> some kind of bug :-).
> 
> >> >Or that Tcl's comment is a bug.
> 
> >> Exactly.  It's the result of a deliberate design decision, with benefits
> >> and drawbacks of its own.  We want to study the design decision, not a
> >> single side effect.
> 
> >> Only accusing Python indentation as being a bug is even sillier than
> >> accusing Tcl comments -- the comments are a side effect, and could be
> >> hacked away at some expense.  The indentation is a fundamental part of the
> >> design of the language, and it's why I like Python so much, in spite of
> >> all the languages (such as Lisp) with much better OO, scoping, and so on.
> 
> >So what you are saying is the whitespace in Python was an explicit design
> >decision whereas the comment behaviour in Tcl is a side effect.
> 
> Sort of.  Both are on the same scale: they're useful and deliberate design
> decisions.  Whitespace in Python is more deliberate than comments in Tcl:
> indentation is a fundamental part of Python's definition.  Comments which

Whitespace is a fundamental part of Python's design but it adds no
functional benefit, only syntactic benefit, which as we all know is
subjective. It is equally possible to have a Python which uses {} as
block delimiters.

> don't always comment are not fundamental to Tcl, although the _reasons_
> they do not comment are fundamental.
> 

Once the decision was made that all Tcl commands are to be processed the
same and that it was up to the individual command how it uses its
arguments, the behaviour of comments was defined.

Not breaking (or at least severely damaging with multiple exceptions) the above 
rule in order to implement a 'proper' comment was a decision which may have
been made explicitly or implicitly; it does not really matter. The point is that
Tcl with a 'proper' comment would be a completely different language.

Therefore I would say that the behaviour of the Tcl comment is more
fundamental to Tcl than using whitespace for block definition is to Python
simply because of the cost involved in making a different decision.

I would say that the behaviour of the Tcl comment is a lot harder to understand
than how Python uses whitespace.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Espen Vestre
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <w6btdmw4mm.fsf@wallace.nextel.no>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> 	(setq code (fortran-expr "some fortran expression"))
> 	(eval code)
> 
> The fortran expression has to obey Lisp's rules about what can be in
> a string which are much more stringent than Tcl's {} as you cannot
> even nest strings.

There are no limits to what can be in a lisp string - you are
confusing syntax and semantics!  (Of course, if you actually
want to represent a string containing a fortran program with
the string constant syntax of lisp, you have to obey the
quoting rules, but why would you want to do that?)

-- 

  espen
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3789ABAB.1E71@mailserver.hursley.ibm.com>
Espen Vestre wrote:
> 
> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> >       (setq code (fortran-expr "some fortran expression"))
> >       (eval code)
> >
> > The fortran expression has to obey Lisp's rules about what can be in
> > a string which are much more stringent than Tcl's {} as you cannot
> > even nest strings.
> 
> There are no limits to what can be in a lisp string - you are
> confusing syntax and semantics!  (Of course, if you actually
> want to represent a string containing a fortran program with
> the string constant syntax of lisp, you have to obey the
> quoting rules, but why would you want to do that?)
> 

Of course you can have anything inside a lisp string, but to actually
get that information into the string involves more than simply wrapping
it with "": you have to ensure that any "s inside the string are
backslash-quoted properly and also that any \s are backslash-quoted properly.

The point I was trying to make was that Lisp has similar problems to 
Tcl when it comes to placing arbitrary stuff into a string which is
something that William Tanksley did not seem to know when he wrote:

* A similar effort in Tcl would be possible, but the user would have to
* carefully avoid Tcl's special characters

Tcl does in fact have more problems than Lisp in this regard because of
its syntax rules (hence the comment problem) but as we agree Lisp is not
without its own.

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: William Tanksley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <slrn7okdnn.ep3.wtanksle@dolphin.openprojects.net>
On Mon, 12 Jul 1999 09:47:39 +0100, Paul Duffin wrote:
>Espen Vestre wrote:
>> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

>> >       (setq code (fortran-expr "some fortran expression"))
>> >       (eval code)

>> > The fortran expression has to obey Lisp's rules about what can be in
>> > a string which are much more stringent than Tcl's {} as you cannot
>> > even nest strings.

>> There are no limits to what can be in a lisp string - you are
>> confusing syntax and semantics!  (Of course, if you actually
>> want to represent a string containing a fortran program with
>> the string constant syntax of lisp, you have to obey the
>> quoting rules, but why would you want to do that?)

>The point I was trying to make was that Lisp has similar problems to 
>Tcl when it comes to placing arbitrary stuff into a string which is
>something that William Tanksley did not seem to know when he wrote:

>* A similar effort in Tcl would be possible, but the user would have to
>* carefully avoid Tcl's special characters

Of course neither language can put arbitrary stuff into a syntactic
string.  That's why I said that Lisp would use reader macros, which is
something that Tcl doesn't have.

>Tcl does in fact have more problems than Lisp in this regard because of
>its syntax rules (hence the comment problem) but as we agree Lisp is not
>without its own.

I can't argue that any language, including Lisp, is completely without
problems :), but I do argue that Tcl's approach to language customization
is not anywhere near as flexible as Lisp's.

But I'm a silly one to argue -- I use Python in preference to Tcl, and
Python is far less customizable than Tcl (which is why I like it).  I've
never yet written a reader macro in Lisp, and I doubt I ever will
(although I am curious as to whether an indent/dedent based syntax might
be amusing in Lisp).

>Paul Duffin

-- 
-William "Billy" Tanksley
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <877lo5hau4.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@dolphin.openprojects.net (William Tanksley) writes:

> But I'm a silly one to argue -- I use Python in preference to Tcl, and
> Python is far less customizable than Tcl (which is why I like it).  I've
> never yet written a reader macro in Lisp, and I doubt I ever will
> (although I am curious as to whether an indent/dedent based syntax might
> be amusing in Lisp).

It would seem to me that an indent/dedent based syntax might mix quite
well with Lisp, since Lisp programmers tend to read programs based on
their indentation anyway (ignoring parens).  So ensuring that the
seen indentation and the program structure match (by definition) might 
have its advantages.  OTOH Lisp source should continue to be
represented internally as lists, so what this boils down to would be an
algorithm for writing/reading nested lists without delimiters but with
indentation.  An interesting idea, though I rather suspect that it
would be quite wasteful on screen real-estate, and probably a bit
confusing.  But that would be a nice toy project...
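
The writing half at least is only a few lines (a toy sketch; note that
a one-element list prints the same as a lone atom, which already hints
at the confusion):

(defun print-indented (form &optional (depth 0))
  ;; print FORM with indentation instead of parens: a list's head goes
  ;; at DEPTH, its remaining elements one level deeper
  (cond ((atom form)
         (format t "~&~v@T~S" (* 2 depth) form))
        (t
         (print-indented (first form) depth)
         (dolist (sub (rest form))
           (print-indented sub (1+ depth))))))

;; (print-indented '(+ (sin x) 2.0)) prints:
;;
;; +
;;   SIN
;;     X
;;   2.0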

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Hume Smith
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <931875498.9094811@news.glinx.com>
····@acm.org (Pierre R. Mai) wrote:
>········@dolphin.openprojects.net (William Tanksley) writes:

>> (although I am curious as to whether an indent/dedent based syntax might
>> be amusing in Lisp).

>It would seem to me that an indent/dedent based syntax might mix quite
>well with Lisp, since Lisp programmers tend to read programs based on
>their indentation anyway (ignoring parens).

given how much more lisp is "expressions" than "statements" - how much more
nested things get in Lisp than languages like Python or C - i think it'd be
tricky to pull off.

>  "One smaller motivation which, in part, stems from altruism is Microsoft-
>   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

O.o  i missed that one - HTF do they think that has anything to do with
altruism?  do they even know what the word means?

--
<URL:http://www.glinx.com/~hclsmith/>
From: Christopher Browne
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <n7vi3.21348$AU3.605286@news2.giganews.com>
On Mon, 12 Jul 1999 18:41:59 GMT, William Tanksley
<········@dolphin.openprojects.net> wrote: 
>(although I am curious as to whether an indent/dedent based syntax might
>be amusing in Lisp).

That would provide opportunity for the implementation to be criticized
inanely by *absolutely everyone.*

Not only those that see Lisp as "too many parens," but also those that
knee-jerk into thinking that Python must be worthless due to
"indentation-sensitivity."

-- 
"I am a bomb technician. If you see me running, try to keep up..."
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/langlisp.html>
From: Garrett G. Hodgson
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <378CCAD2.97D4FEA7@research.att.com>
Christopher Browne wrote:
> 
> On Mon, 12 Jul 1999 18:41:59 GMT, William Tanksley
> <········@dolphin.openprojects.net> wrote:
> >(although I am curious as to whether an indent/dedent based syntax might
> >be amusing in Lisp).
> 
> That would provide opportunity for the implementation to be criticized
> inanely by *absolutely everyone.*
> 
> Not only those that see Lisp as "too many parens," but also those that
> knee-jerk into thinking that Python must be worthless due to
> "indentation-sensitivity."

if you could only add in the required use of all combinations of
[$&@_;{}] as well, you might have the perfect language for flamebait.

-- 
Garry Hodgson			comes a time
·····@sage.att.com		when the blind man
Software Innovation Services	takes your hand, says,
AT&T Labs			"don't you see?"
From: Christopher Browne
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <dkbj3.37476$AU3.735796@news2.giganews.com>
On Wed, 14 Jul 1999 17:37:22 GMT, Garrett G. Hodgson
<·····@research.att.com> wrote: 
>Christopher Browne wrote:
>> On Mon, 12 Jul 1999 18:41:59 GMT, William Tanksley
>> <········@dolphin.openprojects.net> wrote:
>> >(although I am curious as to whether an indent/dedent based syntax might
>> >be amusing in Lisp).
>> 
>> That would provide opportunity for the implementation to be criticized
>> inanely by *absolutely everyone.*
>> 
>> Not only those that see Lisp as "too many parens," but also those that
>> knee-jerk into thinking that Python must be worthless due to
>> "indentation-sensitivity."
>
>if you could only add in the required use of all combinations of
>[$&@_;{}] as well, you might have the perfect language for flamebait.

You mean I *forgot* to indicate that the type of each object needs to
be attributed with its unique punctuation?

How forgetful of me.

defun myfunc
   let
       name  @_[1]
       dest  @_[2]
     if
        exist-p $dest
	print "$name" >> *dest
        error (string-append "Error: " $dest" " does not exist!\n")
-- 
"Terrrrrific." -- Ford Prefect
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/langscript.html>
From: Daniel Barlow
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <m3n1wxo5z1.fsf@detached.demon.co.uk>
········@news.hex.net (Christopher Browne) writes:
> You mean I *forgot* to indicate that the type of each object needs to
> be attributed with its unique punctuation?

It would help if you were also to overload the + operator to do string
concatenation.  Oh, and silently coerce things to strings.

-dan
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lw1zed2dyg.fsf@copernico.parades.rm.cnr.it>
········@dolphin.openprojects.net (William Tanksley) writes:

	...

> I can't argue that any language, including Lisp, is completely without
> problems :), but I do argue that Tcl's approach to language customization
> is not anywhere near as flexible as Lisp's.
> 
> But I'm a silly one to argue -- I use Python in preference to Tcl, and
> Python is far less customizable than Tcl (which is why I like it).  I've
> never yet written a reader macro in Lisp, and I doubt I ever will
> (although I am curious as to whether an indent/dedent based syntax might
> be amusing in Lisp).

Have you ever seen PPRINT? :)

BTW. I have a Java implementation if anyone needs it (send me mail in
private).

Cheers


-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Paul Duffin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3789C260.1EF6@mailserver.hursley.ibm.com>
Paul Duffin wrote:
> 
> Espen Vestre wrote:
> >
> > Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> >
> > >       (setq code (fortran-expr "some fortran expression"))
> > >       (eval code)
> > >
> > > The fortran expression has to obey Lisp's rules about what can be in
> > > a string which are much more stringent than Tcl's {} as you cannot
> > > even nest strings.
> >
> > There are no limits to what can be in a lisp string - you are
> > confusing syntax and semantics!  (Of course, if you actually
> > want to represent a string containing a fortran program with
> > the string constant syntax of lisp, you have to obey the
> > quoting rules, but why would you want to do that?)
> >
> 
> Of course you can have anything inside a lisp string but to actually
> get that information into the string involves more than simply wrapping
> it with "", you have to ensure that any "s inside the string are \
> quoted properly and also that any \ are \ quoted properly.
> 
> The point I was trying to make was that Lisp has similar problems to
> Tcl when it comes to placing arbitrary stuff into a string which is
> something that William Tanksley did not seem to know when he wrote:
> 
> * A similar effort in Tcl would be possible, but the user would have to
> * carefully avoid Tcl's special characters
> 
> Tcl does in fact have more problems than Lisp in this regard because of
> its syntax rules (hence the comment problem) but as we agree Lisp is not
> without its own.
> 

Of course Lisp readmacros remove the problem of encapsulating arbitrary
data in a Lisp program because the readmacro is responsible for the
parsing.

Lisp is too complicated for me, I think I will stick to Tcl :)

-- 
Paul Duffin
DT/6000 Development	Email: ·······@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880	International: +44 1962-816880
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lw1zeersas.fsf@copernico.parades.rm.cnr.it>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> Paul Duffin wrote:
> > 
> Of course Lisp readmacros remove the problem of encapsulating arbitrary
> data in a Lisp program because the readmacro is responsible for the
> parsing.
> 
> Lisp is too complicated for me, I think I will stick to Tcl :)

It is simplicity that is difficult to make.  Dump Tcl and embrace
parentheses. You will see your garbage be collected :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Pierre R. Mai
Subject: Reader macros in Common Lisp (Was Re: Why is tcl broken?)
Date: 
Message-ID: <873dyuhyg8.fsf_-_@orion.dent.isdn.cs.tu-berlin.de>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> The point I was trying to make was that Lisp has similar problems to 
> Tcl when it comes to placing arbitrary stuff into a string which is
> something that William Tanksley did not seem to know when he wrote:
> 
> * A similar effort in Tcl would be possible, but the user would have to
> * carefully avoid Tcl's special characters
> 
> Tcl does in fact have more problems than Lisp in this regard because of
> its syntax rules (hence the comment problem) but as we agree Lisp is not
> without its own.

William Tanksley's point, I think, was that in Lisp you don't
have to stuff arbitrary text into a string to implement a processor
for some kind of fancy embedded language, like Fortran.  Instead of
stuffing the program text into strings, you would define a reader
macro and hang it off some macro character to implement the processing
directly.  Since macro characters can read anything they want off the
stream, in any way they like, and may decide with any algorithm when
to end reading, you don't have the problem of needing to quote
anything.

So Common Lisp can in reality chicken out of this problem quite
easily.  Whether one would really want to read Fortran verbatim in CL
is of course open to contention.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Fernando Mato Mira
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3785C6DC.944D1EBB@iname.com>
Paul Duffin wrote:

>
>         (setq code (fortran-expr "some fortran expression"))
>         (eval code)
>
> The fortran expression has to obey Lisp's rules about what can be in
> a string which are much more stringent than Tcl's {} as you cannot
> even nest strings.

One would use a reader macro, eg:

#F(whatever stuff without a dangling right paren)

[I've shown a dispatch macro char. Single chars should be used for frequent (and
`lispy') purposes.]

Just that. No need to call eval. You should only call eval when there's no other
way (or we'll hit you with a big paren ;-)).
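
A rough, untested sketch of how such a dispatch macro could be installed
(FORTRAN-EXPR->LISP is a purely hypothetical translator, and, as noted, the
embedded text must not contain a right paren):

  (set-dispatch-macro-character
   #\# #\F
   (lambda (stream subchar arg)
     (declare (ignore subchar arg))
     ;; Expect #F( ... ) and hand the raw text between the parens to the
     ;; translator at read time; no string quoting rules apply to it.
     (assert (char= (read-char stream t nil t) #\())
     (let ((text (with-output-to-string (out)
                   (loop for ch = (read-char stream t nil t)
                         until (char= ch #\))
                         do (write-char ch out)))))
       (fortran-expr->lisp text))))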

See
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/lisp/code/syntax/infix/
[I should add this to my .signature]
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87wvwahtjr.fsf@orion.dent.isdn.cs.tu-berlin.de>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> I do not know what the reader is but it sounds as though you have to
> create your own parser which converts from whatever language to Lisp.

You always have to do that if you want to parse another language,
whether by naively embedding it into Lisp, or writing a compiler, or
whatever.  Of course Lisp lets you get by without writing a whole
parser, if you embed a lispified version of the other language.

> Assume I have a Tcl command "fortran-expr" which takes a fortran expr
> and converts it into Tcl code.
> 
> 	set code [fortran-expr {some fortran expression}]
> 	eval $code
> 
> Then of course the fortran expression would have to obey Tcl's rules
> about what can be inside {}.
> 
> So how is Lisp or Forth (of which I only remember postfix notation)
> different to this ?

In Lisp, given that you can come up with any rule at all for where to
end the fortran-like processing, you can write a reader-macro that can
read the fortran expression verbatim, _without_ caring about any Lisp
syntax rules.  E.g.

(#T{some tcl expressions with unbalanced parentheses ) and strings "22" })

Or if the language doesn't have its own balanced delimiter, like
e.g. Pascal, you can do something like this:

(#P program pascal-program 
procedure dummy;
begin
end;

begin
  dummy;
end.
)

This is all possible, since reader-macros can do whatever they want with
the stuff they read themselves off the stream...

Tcl can't match this, because Tcl's commands (like Lisp's macros) only 
get to touch their arguments after they have been parsed...

Regs, Pierre.
(who'll take expressivity over uniformity any day of the week ;)

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Fernando Mato Mira
Subject: Readmacros (was: Why is tcl broken?)
Date: 
Message-ID: <37860681.3A3D2918@iname.com>
One little thing:

Please do not use #F (as I showed) or #T (as Pierre showed).
Those should be kept for Scheme-on-CL implementations (eg: pseudoscheme).

Which brings me to:

1. Has anybody compiled a "Who's Who" of reader macros?
2. For ANSI 2, how about extending the dispatching readmacro namespace to
   things of the form, say:

    ##foo(.............)
From: Heribert Dahms
Subject: Re: Readmacros (was: Why is tcl broken?)
Date: 
Message-ID: <7m5l1o$nva$1@news.rz.uni-karlsruhe.de>
In <·················@iname.com> ········@iname.com writes:

: 2. For ANSI 2, how about extending the dispatching readmacro namespace to
: things of the form,
:    say:
: 
:     ##foo(.............)

Does that mean that in Lisp a closing paren is required and
embedded parens need to be balanced or quoted?

In Forth, it's possible to have
foo .....8-)..... really everything, maybe except ASCII NUL
as long as foo knows when to finish reading the input stream.

In Tcl, as already mentioned, it's impossible, because the command line is pre-parsed.


Bye, Heribert (·····@ifk20.mach.uni-karlsruhe.de)
From: Pierre R. Mai
Subject: Re: Readmacros (was: Why is tcl broken?)
Date: 
Message-ID: <87n1x5a75b.fsf@orion.dent.isdn.cs.tu-berlin.de>
·····@ifk20.mach.uni-karlsruhe.de (Heribert Dahms) writes:

> In <·················@iname.com> ········@iname.com writes:
> 
> : 2. For ANSI 2, how about extending the dispatching readmacro namespace to
> : things of the form,
> :    say:
> : 
> :     ##foo(.............)
> 
> Does that mean that in Lisp a closing paren is required and
> embedded parens need to be balanced or quoted?
> 
> In Forth, it's possible to have
> foo .....8-)..... really everything, maybe except ASCII NUL
> as long as foo knows when to finish reading the input stream.

The situation is identical in Common Lisp:  A reader macro can read
anything it wants to from the stream, and do with it anything it wants
to as well.  Line and block comments are usually implemented as
reader macros, and they don't take balanced parens into account...
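
For instance, a line comment can be sketched as a reader macro roughly like
this (the standard ; works essentially this way; the % character is only
picked here so as not to clobber anything real):

  (set-macro-character
   #\%
   (lambda (stream char)
     (declare (ignore char))
     ;; Swallow everything up to the end of the line...
     (loop for c = (read-char stream nil #\Newline t)
           until (char= c #\Newline))
     ;; ...and return no values, so the reader produces nothing at all.
     (values)))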

OTOH you try not to stray too far outside Lisp syntax in your reader
macros for two reasons:  Users/Editors are used to balanced parens and 
other Lisp-like syntax conventions, and you might be able to reuse
large parts of the existing reader/package-system/whatever...

Since we like our syntax, we don't violate it unnecessarily.  OTOH, if 
we really want to, we can, which is sometimes important...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Fernando Mato Mira
Subject: Re: Readmacros (was: Why is tcl broken?)
Date: 
Message-ID: <37899DF3.A22FFD38@iname.com>
Heribert Dahms wrote:

> In <·················@iname.com> ········@iname.com writes:
>
> : 2. For ANSI 2, how about extending the dispatching readmacro namespace to
> : things of the form,
> :    say:
> :
> :     ##foo(.............)
>
> Does that mean that in Lisp a closing paren is required and
> embedded parens need to be balanced or quoted?

It was an example. Any delimiter that marks the end of a symbol should
do. Whether you need ()s, {}s, etc. depends on what you want to put in .......
From: Donal K. Fellows
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7m5405$scs$1@m1.cs.man.ac.uk>
In article <··············@orion.dent.isdn.cs.tu-berlin.de>,
Pierre R. Mai <····@acm.org> wrote:
> Regs, Pierre.
> (who'll take expressivity over uniformity any day of the week ;)

So you're a fan of Perl, yes?  (Cheap shot, no? :^)

Donal.
-- 
Donal K. Fellows    http://www.cs.man.ac.uk/~fellowsd/    ········@cs.man.ac.uk
-- The small advantage of not having California being part of my country would
   be overweighed by having California as a heavily-armed rabid weasel on our
   borders.  -- David Parsons  <o r c @ p e l l . p o r t l a n d . o r . u s>
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87d7y1a6oo.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@cs.man.ac.uk (Donal K. Fellows) writes:

> In article <··············@orion.dent.isdn.cs.tu-berlin.de>,
> Pierre R. Mai <····@acm.org> wrote:
> > Regs, Pierre.
> > (who'll take expressivity over uniformity any day of the week ;)
> 
> So you're a fan of Perl, yes?  (Cheap shot, no? :^)

Ahh, that would presuppose that I think that Perl is expressive.  That
assumption is wrong.  Furthermore I think that Perl is not very
expressive at all (outside the very narrow area of RegExps and
Awk/Sed/Shell like behaviour).  If a language uses hash-tables
directly as a replacement for structures, objects and what not, I
think it doesn't care much about expressivity (which is much more
about the things I can communicate directly and precisely to fellow
programmers, than what can be computed by the language in the shortest
possible piece of code).

But when comparing Tcl and Perl (and we've had quite a bit of code in
both languages in a project I took over two years ago), we've had far
fewer problems with maintaining the Perl code than with maintaining
the Tcl code, which seems basically write-only unless very clearly
documented...  So I think that Perl is quite a bit more expressive
than Tcl...  Note that this is of course only anecdotal evidence...

Hmmm, I think this thread has reached its useful end of life...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Hume Smith
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <931582716.1670958469@news.glinx.com>
····@acm.org (Pierre R. Mai) wrote:

>But when comparing Tcl and Perl (and we've had quite a bit of code in
>both languages in a project I took over two years ago), we've had far
>less problems with maintaining the Perl code, than with maintaining
>the Tcl code, which seems basically write-only, unless very clearly
>documented...

really?  i know a lot of people who use perl who think it's exactly the
reverse; and every time i've tried bending my head around perl's bizarre
idears i eventually need a quiet vacation.

but that's me; i like glue languages like tcl and lisp over ones that put
everything into grammar... Some people prefer it the other way.  s'long's i
never have to work with 'em , we'll be alright.

--
<URL:http://www.glinx.com/~hclsmith/>
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87u2rc910k.fsf@orion.dent.isdn.cs.tu-berlin.de>
Hume Smith <········@glinx.deliver-me-from-evil.com> writes:

> ····@acm.org (Pierre R. Mai) wrote:
> 
> >But when comparing Tcl and Perl (and we've had quite a bit of code in
> >both languages in a project I took over two years ago), we've had far
> >less problems with maintaining the Perl code, than with maintaining
> >the Tcl code, which seems basically write-only, unless very clearly
> >documented...
> 
> really?  i know a lot of people who use perl who think it's exactly the
> reverse; and every time i've tried bending my head around perl's bizarre
> idears i eventually need a quiet vacation.

Well, I don't particularly like Perl's syntax either, and I dislike
Tcl's "evaluation rules".  But on the whole, I couldn't care less
about Tcl vs. Perl, because the answer is not Mu, but CL ;).

But I can't dodge the "real-world" observation that we really had
fewer problems with maintaining Perl code than with Tcl code.  You
can't extrapolate from that of course, but that's been my observation.

Again that has been little influenced by my preferences, but more by
my/our experiences.  Interestingly, we even had more problems
maintaining the C++ code of the project, which was one of the main
reasons that the new simulation is now in CL (OTOH, the C++ code in
question was of quite low quality, with very shallow inheritance,
undocumented interfaces/assumptions all over the place, and 3
cooperating processes with a similar mess of interaction[1], so C++
isn't always worse in this regard).

Back to Lisp:  With over 10 years of experience with C++, I still find 
that I can read/use/maintain bad CL code much better than bad C++
code.  And I think there is relatively less bad CL code out there than 
C++ code.

> but that's me; i like glue languages like tcl and lisp over ones that put
> everything into grammar... Some people prefer it the other way.  s'long's i

I, too, prefer languages that are thin on grammar.  I don't like
languages that are thin on concepts.

Regs, Pierre.

Footnotes: 
[1]  Debugging such a beast given the usual C-level tools can be quite
challenging...

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: William Deakin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <378722E0.C6B7EE02@pindar.com>
"Pierre R. Mai" wrote:

> ...If a language [perl] uses hash-tables directly as a replacement for
> structures, objects and what not, I
> think it doesn't care much about expressivity (which is much more about the
> things I can communicate directly and precisely to fellow programmers, than
> what can be computed by the language in the shortest possible piece of code).

I don't understand this point. perl contains three basic data structures that
can be used to represent 'structures, objects and what not': lists/arrays,
hash-tables and typeglobs. You can use all these data types to represent and
create OO data structures. Using packages (read classes/namespaces) you can
create quite sophisticated OO systems: (pre and post methods, autoloading of
functions, operation overloading ...). This I think gives you the ability to
express yourself more in perl than you may think.

Having looked at CL and perl, there is no way that I can argue that perl is as
expressive as CL, but perl is much more expressive than some would have us believe.

:-) will
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87wvw891ur.fsf@orion.dent.isdn.cs.tu-berlin.de>
William Deakin <·····@pindar.com> writes:

> I don't understand this point. perl contains three basic data
> structures that can be used to represent 'structures, objects and
> what not': lists/arrays, hash-tables and typeglobs. You can use all
> these data types to represent and create OO data structures. Using
> packages (read classes/namespaces) you can create quite
> sophisticated OO systems: (pre and post methods, autoloading of
> functions, operation overloading ...). This I think gives you the
> ability to express yourself more in perl than you than may think.

I don't want to go too deeply into this, but to me expressivity does not
mean the ability to implement.  Sure I can use lists/arrays, hash-tables
or typeglobs to _implement_ objects (most CL implementations use vectors
a.k.a. arrays to implement structures/objects deep-down).  The point is
whether the user has to know that, i.e. whether there is an insulation 
layer between the implementation and the user.  

Why is this so important?  This insulation layer or in software-speak
interface has the same advantages as all other interfaces in program
building:  It gives you a clean abstraction, that allows you to
communicate the intent of your code clearly and unambiguously without
resorting to patterns/idioms[1].  It gives the implementation (or the
implementation hacker) leeway to change low-level representations
or algorithms.  It gives the user a specification for the feature in
higher-level terms that he understands and cares about, without resorting
to low-level implementation artifacts.  Etc.
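
To make the "insulation layer" concrete, a tiny made-up CLOS sketch (nothing
to do with any real code discussed here): callers go through the declared
interface and never see how the slots are stored:

  (defclass account ()
    ((owner   :initarg :owner   :reader account-owner)
     (balance :initarg :balance :accessor account-balance
              :initform 0)))

  (defun deposit (account amount)
    ;; Whether slots live in a vector, a hash-table or something else
    ;; entirely is the implementation's business, not the caller's.
    (incf (account-balance account) amount))

  ;; (deposit (make-instance 'account :owner "A. User") 100)  ;; => 100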

If this is a good design strategy for applications and libraries, it
should be a good design strategy for language designers.

If we compare e.g. the design of objects in Perl, C++ and CL (with the
MOP), then we can see that Perl gives you more or less the direct
low-level implementation, striving to include as little as possible of
new OO-machinery (IIRC this was one of the main design criteria of
the OO system in Perl5, and I think there was some pride in the small
number of changes from Perl4 needed).  C++ OTOH carries over C's
struct design, which is fairly high-level, adding needed OO machinery
in the same style (e.g. inheritance is declarative), but doesn't allow
you any ordered way of getting at the low-level implementation
(besides the omnipresent ability to diddle pointers and bits in C).

CLOS gives you a high-level declarative interface, but (together with
the MOP) allows you great flexibility in inspecting and influencing
the low-level implementation.  A few days ago I wrote a
mock-persistent (i.e. the persistency is limited by the lifetime of
the image, an idea that Kelly Murray posted here some time ago) "OODB"
for CLOS using the MOP in 266 LOC, staying completely within CL, which
uses the same interface as normal classes, with a declarative way of
specifying indices, etc.

I don't doubt that a similar thing could be done in Perl (in C++ you'd 
need to hack the implementation, or provide a pre-processor, IMHO),
but I doubt whether the interface would/could be as declarative.

IMHO a good discussion of this design-principle (i.e. what to expose
of the implementation, what not, and how) can be found in the works of
Gregor Kiczales et al., for example in "The Art of the Meta-object
Protocol" (AMOP), or his later work on Open Implementation or
Aspect-Oriented Programming[2].

> Having looked at CL and perl, there is no way that I can argue that
> perl is as expressive as CL, perl is much more expressive than some
> may have us believe.

Oh, there are worse languages than Perl in this regard.  It just means 
that I don't have to love Perl, and that I use/seek better
alternatives when possible...

Regs, Pierre.

Footnotes: 
[1]  Patterns and idioms are needed wherever there is no
direct interface for some functionality.  As such they are the best
solution when no interface can be built.  But IMHO language designers
should look at each (low-level) pattern or idiom as a deficiency in the
expressivity of their language, and should consider whether there is a 
way to include an interface for the needed functionality, or better
yet, to provide tools to the user that allow him to package the needed 
functionality himself.  CL's macros and the MOP are examples of such
tools, which save CL the need for many patterns.  Of course this is
just one design-criterion for a programming language, and the designer 
has to balance many of them to produce a workable language.  But it is 
IMHO an important one.

[2]  See http://www.parc.xerox.com/spl/projects/oi/ieee-software/ or
somewhere below/above that link and in the vicinity...

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: William Deakin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37875BB8.F81DB391@pindar.com>
"Pierre R. Mai" wrote:

> The point is whether the user has to know that, i.e. whether there is an
> insulation layer between the implementation and the user. Why is this so
> important?  This insulation layer or in software-speak interface has the same
> advantages as all other interfaces in program building:  It gives you a clean
> abstraction, that allows you to communicate the intent of your code clearly
> and unambiguously without resorting to patterns/idioms[1].

Even looking at note [1] I am confused by this! What is wrong with patterns or
idioms? Also, could you explain what the difference is between a low-level
pattern or idiom and a high-level one? In my humble experience, the use of
design patterns or idioms makes development easier by giving a high-level
abstraction which can then be altered or changed as required to solve problems.

>  ... If this is a good design strategy for applications and libraries, it
> should be a good design strategy for language designers.

This is all pretty standard software engineering stuff. Or so I thought!? ;-)

> If we compare e.g. the design of objects in Perl, C++ and CL (with the MOP),
> then we can see that Perl gives you more or less the direct low-level
> implementation, striving to include as little as possible of new OO-machinery
> (IIRC this was one of the main design criterions of the OO system in Perl5,
> and I think there was some pride on the small number of changes from Perl4
> needed). .... CLOS gives you a high-level declarative interface, but (together
> with the MOP) allows you great flexibility in inspecting and influencing the
> low-level implementation.

But it is easy to then write your own packages in perl, on top of the low-level
perl implementation, to give you the high-level abstraction required. You have
access to low-level functions, and thanks to perl's easy scripting/dynamic
evaluation you can 'roll your own' object system. Like you can in CL!

I am also unsure about the abbreviation IIRC.

> Some days ago I've written a mock-persistent (i.e. the persistency is limited
> by the live-time of
> the image, an idea that Kelly Murray posted here some time ago) "OODB" for
> CLOS using the MOP in 266 LOC, staying completely within CL, which uses the
> same interface as normal classes, with a declarative way of specifiying
> indices, etc.

I'm sorry, but you lost me here.

> I don't doubt that a similar thing could be done in Perl (in C++ you'd need to
> hack the implementation, or provide a pre-processor, IMHO), but I doubt
> whether the interface would/could be as declarative.

Probably not. However, it could be done.

> IMHO a good discussion of this design-principle (i.e. what to expose of the
> implementation, what not, and how) can be found in the works of Gregor
> Kiczales et al., for example in "The Art of the Meta-object Protocol" (AMOP),
> or his later work on Open Implementation or Aspect-Oriented Programming[2].

Thank you for your pointer. I will dig these works out and look forward to
reading them with some anticipation. Thank you for the web reference also.

> Oh, there are worse languages than Perl in this regard.  It just means that I
> don't have to love Perl, and that I use/seek better alternatives when
> possible...

I would hope that my remarks don't indicate that I do love perl. Perl is a tool,
and as a tool-wielding animal I use whatever is to hand. I also seek and use
better alternatives but will use perl when appropriate. However, through
personal experience I have found perl to have a stronger and more readily
adaptable OO implementation than may be thought by others.

But then again I cannot claim to be an OO expert. In fact I believe myself not
to be any kind of expert in much of anything (although after a few pints on a
Friday night I might argue the case about some stuff to do with the optics of
anisotropic crystals).

Best regards,

:-) will
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <ey3wvw7cp1j.fsf@lostwithiel.tfeb.org>
* William Deakin wrote:
> "Pierre R. Mai" wrote:
>> The point is whether the user has to know that, i.e. whether there is an
>> insulation layer between the implementation and the user. Why is this so
>> important?  This insulation layer or in software-speak interface has the same
>> advantages as all other interfaces in program building:  It gives you a clean
>> abstraction, that allows you to communicate the intent of your code clearly
>> and unambiguously without resorting to patterns/idioms[1].

> Even looking at note [1] I am confused by this! What is wrong with
> patterns or idioms? Also could you explain what is the difference
> between a low-level pattern or idiom and a high level one? 

I think the point he is trying to make is that some people think that
a lot of the currently-fashionable (is it still?) design-patterns
stuff is really a way of working around the fact that the
implementation language does not provide the concepts you need
directly.  In a language which provides these ideas directly, or which
can be built up to do so, you tend not to resort to patterns & idioms
but to construct what you need directly in the language.

This is one of the reasons (the claim is) why languages like C++ &
Java, which are low-level and not easily extensible, tend to be rife
with patterns, while languages like Lisp tend not to be.  Patterns may
appear for a time in Lisp, but the language absorbs them into itself.

> In my
> humble experience, the use of design patterns or idioms makes
> development easier by giving a high level abstraction which can then
> be altered or changed as required to solved problems.

So in this case, in Lisp, you'd typically concoct some bit of syntax
via macros which described what you were trying to do, and then the
implementation would change underneath.  Lisp still does have patterns
but they tend to be meta-level ones which look something like `to
implement this  pattern (bit of syntax) you can use this kind of
trick'.

A good example in Lisp is the kind of WITH-x idea.  In a lower-level
language you might need this notion of `set up some stuff, do x then
clean up, even if x fails', and that would be some kind of pattern,
because the language is not powerful enough to express it.  In Lisp,
you'd write something like:

	(with-x (...)
	   ...)

which has reduced this pattern to a bit of syntax.  And the meta-level
pattern here is that a WITH-x macro can be conveniently implemented in
terms of a CALL-WITH-x function:

	(with-x (...)
	   ...)

	->

	(call-with-x #'(lambda (...) ...) ...)

And you can expose this CALL-WITH-x function itself if you like to
give a more programmatic interface to the functionality.  The classic
geological example of this meta-pattern, of course, is LET and LAMBDA:

	(let ((x 1)) ...)

	->

	(funcall #'(lambda (x) ...) 1)
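
To make the transformation concrete, here is one possible instance of this
meta-pattern (the names WITH-OPEN-LOG and CALL-WITH-OPEN-LOG are invented for
illustration only; CL's own WITH-OPEN-FILE already covers this particular case):

  (defun call-with-open-log (thunk pathname)
    ;; The setup/cleanup lives here, once, protected against non-local exits.
    (let ((log (open pathname :direction :output
                              :if-exists :append
                              :if-does-not-exist :create)))
      (unwind-protect (funcall thunk log)
        (close log))))

  (defmacro with-open-log ((var pathname) &body body)
    `(call-with-open-log (lambda (,var) ,@body) ,pathname))

  ;; (with-open-log (log "events.log")
  ;;   (format log "~&something happened~%"))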

--tim
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87btdiiq70.fsf@orion.dent.isdn.cs.tu-berlin.de>
Tim Bradshaw <···@tfeb.org> writes:

> I think the point he is trying to make is that some people think that
> a lot of the currently-fashionable (is it still?) design-patterns
> stuff is really a way of working around the fact that the
> implementation language does not provide the concepts you need
> directly.  In a language which provides these ideas directly, or which
> can be build up to do so, you tend not to resort to patterns & idioms
> but to construct what you need directly in the language.

[ rest of very good explanation elided ]

Yes, that was more or less the point I was feebly trying to make in my 
muddled ways.  Thank you for following up on my behalf...

There is one thing I feel I have to add, that might not have come
across in my original posting:  I don't think that patterns and idioms 
are bad things.  Indeed, I think that patterns and idioms are quite
good ways of trying to bridge the gap between human-level concepts and
reasoning on the one hand and computer-language-level concepts on the
other hand.  Patterns and idioms can be very powerful, but at a cost: 
They are essentially write-time[1] mechanisms, so they cost programmer
think time.  There is also (at present) little support for Patterns
from the usual programming environments or languages.  Patterns must
be debugged/verified at every instantiation, and are therefore more
error-prone than ready-made constructs, etc.

The extension mechanisms in programming languages don't suffer from
those problems.  If I write a macro, I only have to debug it once, and
the user doesn't have to think about the inner workings of that macro.
The problem is of course that programming languages ever only reach so
far, and then more informal mechanisms like patterns or idioms have to
take over.

When comparing languages, you can see though, that some languages need
patterns and idioms at a far lower level than other languages.  To me
that is always an indication that the language in question isn't as
expressive as it could be.  Take the patterns in the gang of four
book.  Many of those are needed in C++ to overcome certain limitations 
of the language (like e.g. missing multiple dispatch, a limited static 
type system, etc.).  Common Lisp either offers integrated mechanisms
or is extensible in such a way, that most of the patterns in that book 
can be implemented directly.

To summarize: I feel that patterns work best at the higher levels
of implementation and design, where programming languages can't
reach, and that low-level patterns are a call to action for language
designers.

Regs, Pierre.

Footnotes: 
[1]  This is the time the program is written (or read/modified by the
programmer for that matter).  Given the different times of evaluation 
present in (Common) Lisp, the order would be write-time, read-time,
compile-time, run-time, ...

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <ey3u2racrt4.fsf@lostwithiel.tfeb.org>
* Pierre R Mai wrote:
> There is also (at present) little support for Patterns
> from the usual programming environments or languages.  Patterns must
> be debugged/verified at every instantiation, and are therefore more
> error-prone than ready-made constructs, etc.

This is a really good point I think.  A lot of the pattern-y type
things that you might want to have can be *very* prone to obscure
bugs.  If you write a macro then you have to get it right, but you only
ever have to do that once, while if you use a pattern you have to get it
right, or not, every time.

--tim
From: William Deakin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3789B111.8D9D0222@pindar.com>
> Patterns and idioms can be very powerful, but at a cost: They are essentially
> write-time[1] mechanisms, so they cost programmer think time.  There is also
> (at present) little support for Patterns
> from the usual programming environments or languages.  Patterns must be
> debugged/verified at every instantiation, and are therefore more error-prone
> than ready-made constructs, etc.

I think this is an excellent point. But, if you have a difficult problem to
solve, doesn't using a framework help you to solve the problem at hand? This
then saves time and thinking. There are also questions of maintenance: I have
found code written with patterns tends to be clearer because people have spent
time thinking and not banging away at the keyboard like the proverbial infinite
number of monkeys.

Maybe this is just me, but the way I code is not to try and start from scratch
but to beg, borrow or steal something else that does a job similar to what I
want to do and then adjust it. However, I do not have experience of large scale
CL development and the methods used may be very different.

> The extension mechanisms in programming languages don't suffer from those
> problems.  If I write a macro, I only have to debug it once, and the user
> doesn't have to think about the inner workings of that macro.

I accept the power of the macro, but lisp isn't the only language where you can
write functional sub-routines. But, how (conceptually) is this different from
writing a library or package that you can use again? Apart from the obvious
(like it is a lot harder to write a package or library) that is.

> ...I feel that patterns work best at the higher levels of implementation and
> design, where programming languages can't reach...

Being flippant, doesn't this sound like a commercial for a type of Danish
lager? ;-)

[the rest of this excellent message has been omitted]

Best regards,

:-) will
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <ey3so6tdbgg.fsf@lostwithiel.tfeb.org>
* William Deakin wrote:
> I accept the power of the macro, but lisp isn't the only language
> where you can write functional sub-routines. But, how (conceptually)
> is this different from writing a library or package that you can use
> again? Apart from the obvious (like it is alot harder to write a
> package or library) that is.

I'm not sure if you are trying to argue that other languages can do
the things that Lisp macros can.  In case you are, then I think that's
wrong, and it's the point that I was trying to make.  Other languages
(most other languages) let you design your own libraries of functions
and cook your own classes, not many other languages let you design
your own control constructs, and without that facility you really are
stuck in design-pattern land.

--tim
From: Quentin Deakin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7mdfbs$ksc$1@epos.tesco.net>
Tim Bradshaw wrote:

> I'm not sure if you are trying to argue that other languages can do the
> things that Lisp macros can. In case you are, then I think that's wrong, and
> it's the point that I was trying to make.

You are correct, I am not trying to argue that other languages can do the
things that Lisp macros can do. There are things that you can do, in perl
say, that can do some of the things that are like the things that macros can
do. Hmmm. Sorry about that sentence. But I think it is the point I would
like to make.

> Other languages (most other languages) let you design your own libraries
> of functions and cook your own classes, not many other languages let you
> design your own control constructs, and without that facility you really are
> stuck in design-pattern land.

This last bit fascinates me. What do you mean by a control construct? Using
perl as an example, is this when you bind data to functions so that if
something changes that variable a routine, or routines, are called and can
control the way in which that variable is altered? Or something else?

Best regards,

:-) Will

If this is garbled I sorry but i posted it from my PC at home, using
microtosh.
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <ey3pv1xcsdv.fsf@lostwithiel.tfeb.org>
* Quentin Deakin wrote:

> This last bit fascinates me. What do you mean by a control-constructs? Using
> perl as an example is this when you bind data to functions so that if
> something changes that variable a routine, or routines, are called and can
> control the way in which that variable is altered? Or something else?

A trivial case of what I mean is that you can write COND as a macro in
terms of other things in Lisp.  Or CASE in terms of COND or IF, or ...
And you can go as far as you want along these lines.  For a language
like (say) C, you are stuck with the control constructs you are
given: if you don't like the fall-through behaviour that you get with
switch, you can't invent your own (in general; you can probably do
some preprocessor hack that will work some of the time, but you can't
do it right)...
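
For instance, a cut-down COND written as an ordinary macro over IF (a sketch
only; the real COND also handles clauses with no forms after the test, among
other things):

  (defmacro my-cond (&rest clauses)
    (if (null clauses)
        nil
        (destructuring-bind ((test &rest body) &rest more) clauses
          `(if ,test
               (progn ,@body)
               (my-cond ,@more)))))

  ;; (my-cond ((> 1 2) 'bigger)
  ;;          (t       'not-bigger))   ;; => NOT-BIGGER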

--tim
From: Christopher B. Browne
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <slrn7ol9ce.vk5.cbbrowne@knuth.brownes.org>
On 13 Jul 1999 01:20:28 +0100, Tim Bradshaw <···@tfeb.org> posted:
>* Quentin Deakin wrote:
>
>> This last bit fascinates me. What do you mean by a control-constructs? Using
>> perl as an example is this when you bind data to functions so that if
>> something changes that variable a routine, or routines, are called and can
>> control the way in which that variable is altered? Or something else?
>
>A trivial case of what I mean is that you can write COND as a macro in
>terms of other things in Lisp.  Or CASE in terms of COND or IF, or ...
>And you can go as far as you want along these lines.  For a language
>like (say) C, you are stuck with the control constructs you are
>given: if you don't like the fall-through behaviour that you get with
>switch, you can't invent your own (in general.  You can probably do
>some preprocessor hack that will work some of the time, but you can't
>do it right...

The ability to generate your own control structures is either critical
or "merely of academic interest."

CL includes many of the control structure forms/idioms that have proven
useful over the years, which is the "academic interest" part.  COND/CASE
are classic examples of control structures that can be created using
macros.  Which is well and good, but when every CL *already has* COND/CASE,
there is no need for the user to have macros in order to merely recreate
what the language already supports.

However.  The valuable part is that if you should have some structures
that keep replicating a common pattern, that pattern may be turned into
a new macro.

"Pseudo-for-instance," if a particular transaction management protocol
requires setting some variables, doing work, and then closing out the
transaction, it might be a slick idea to create a macro that does the
transaction management, leaving you free to write simpler code to do
the work of the transaction.

Or (and this isn't a *real* good example, as it is readily done using
a procedure), a macro might be constructed to
	(loop-over-lines filename procedure-to-do-for-each-line)
where the macro would rewrite this into an efficient loop structure
that does error-handling so as to guarantee that the file *will* get
closed at the end.
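
A sketch of what such a macro might look like (untested, and taking a body
rather than a procedure; WITH-OPEN-FILE already supplies the guaranteed close
via UNWIND-PROTECT):

  (defmacro loop-over-lines ((line-var filename) &body body)
    (let ((stream (gensym "STREAM")))
      `(with-open-file (,stream ,filename :direction :input)
         (loop for ,line-var = (read-line ,stream nil nil)
               while ,line-var
               do (progn ,@body)))))

  ;; (loop-over-lines (line "/etc/hosts")
  ;;   (print line))
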
-- 
"If the future navigation system [for interactive networked services on
the NII] looks like something from Microsoft, it will never work."
-- Chairman of Walt Disney Television & Telecommunications
········@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lwzp110yo5.fsf@copernico.parades.rm.cnr.it>
········@news.brownes.org (Christopher B. Browne) writes:

	...

> However.  The valuable part is that if you should have some structures
> that keep replicating a common pattern, that pattern may be turned into
> a new macro.
> 
> "Pseudo-for-instance," if a particular transaction management protocol
> requires setting some variables, doing work, and then closing out the
> transaction, it might be a slick idea to create a macro that does the
> transaction management, leaving you free to write simpler code to do
> the work of the transaction.
> 
> Or (and this isn't a *real* good example, as it is readily done using
> a procedure), a macro might be constructed to
> 	(loop-over-lines filename procedure-to-do-for-each-line)
> where the macro would rewrite this into an efficient loop structure
> that does error-handling so as to guarantee that the file *will* get
> closed at the end.

As a matter of fact I have some first hand experience in some
translation work I am doing.

I needed to produce neatly indented code for a language (Esterel)
which is much more readable when properly formatted.  Since I am
working in Java (I know, I know...) I ended up rewriting the CL Pretty
Printer in that language.  It was easy to hook it up (just define a
PrettyPrintWriter class) in the JDK, but there is no way to set up
something like PPRINT-LOGICAL-BLOCK.

You can only guess how much time I spent chasing down missing
'closeLogicalBlock's in my code. :{

This is akin to chasing down memory leaks (albeit a little less
worrisome).  It wastes programmers' time and so on...
Being able to extend the language in a sensible way saves the day on
many an occasion.
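
For reference, the Lisp-side construct looks roughly like this (a simplified
version of the standard PPRINT-LOGICAL-BLOCK idiom); the closing of the block
is part of the macro's expansion, so there is no closeLogicalBlock to forget:

  (defun show-list (list &optional (stream *standard-output*))
    ;; Print LIST as a logical block; the prefix/suffix and the block's
    ;; end are handled by the macro itself.
    (pprint-logical-block (stream list :prefix "(" :suffix ")")
      (pprint-exit-if-list-exhausted)
      (loop (write (pprint-pop) :stream stream)
            (pprint-exit-if-list-exhausted)
            (write-char #\Space stream)
            (pprint-newline :fill stream))))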

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: William Deakin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <3789A89E.5A7C4693@pindar.com>
Tim Bradshaw wrote:

> ...a lot of the currently-fashionable design-patterns stuff...

Just like pedal-pushers, afro-wigs and the Spice Girls (i.e. I think it is, but only
in certain quarters)

> is really a way of working around the fact that the implementation language does
> not provide the concepts you need directly.  In a language which provides these
> ideas directly, or which can be build up to do so, you tend not to resort to
> patterns & idioms but to construct what you need directly in the language.

The point I was trying to make was about perl, since the first posting was rating
perl in the C/C++/tcl camp. perl is a lower-level language, but one that will allow
the quick building up of concepts and abstractions. No, perl is not as good as lisp
for this. Yes, it is better than a lot of other languages.

> Patterns may appear for a time in Lisp, but the language absorbs them into itself.

This I find very interesting. Say I wrote code that implemented all the patterns in
the GoF as a C++ template library, phoned up the ANSI committee and convinced them
that this was an important inclusion to the C++ standard and they agreed. [this
could of course be the start of a new language C+=2 :-) but I digress]. Having got
it included, C++ would also have absorbed patterns. Is this the kind of absorption
you are talking about? Is this absorption in the perl/CPAN tradition: you have a
core distribution and a load of interesting stuff you can download if you need it?
Or is this something else?

Cheers,

Will
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87yagmggtu.fsf@orion.dent.isdn.cs.tu-berlin.de>
William Deakin <·····@pindar.com> writes:

> > Patterns and idioms can be very powerful, but at a cost: They are
> > essentially write-time[1] mechanisms, so they cost programmer think
> > time.  There is also (at present) little support for Patterns from
> > the usual programming environments or languages.  Patterns must be
> > debugged/verified at every instantiation, and are therefore more
> > error-prone than ready-made constructs, etc.
> 
> I think this is an excellent point. But, if you have a difficult
> problem to solve, doesn't using a frame work help you to solve the
> problem at hand? This then saves time and thinking. There are also
> questions of maintainance: I have found code written with patterns
> tends to be clearer because people have spent time thinking and not
> banging away at the keyboard like the preverbial infinite number of
> monkies.

To me, high-level patterns are like algorithms.  They are great, if I
have to implement something from scratch.  However I prefer to build and
use libraries that encapsulate them once, and then use them all over the
place.  Take MD5 for example: you don't write a new implementation of
that every time you need to take an MD5 hash of something.  You use a
library (either self-written, or even better, ready-made) and call a
function.  That's because the language of your choice gives you this
fine mechanism of abstraction, the function.

> Maybe this is just me but the way I code is not to try and start
> from scratch but to beg, borrow or steal something else that does
> the job similar to what you want to do and then adjust it. However,
> I do not have experience of large scale CL development and the
> methods used may be very different.

Well CL development from a high-level perspective isn't that different 
to development in other languages.  Of course you try to re-use as
much as possible.  OTOH, like in every language, it may sometimes be
disadvantageous to re-use a certain piece of code:  It might not fit
in very well with your design, or might be buggy, or might not be
extensible enough for your needs.  In CL maybe you come across this
problem more often, because many C libraries are not written in a
non-C friendly way.  But on the whole, there is no real difference
between developing a C solution, a Perl solution or a Lisp solution
(besides the fact, that CL is more fun of course, and makes the
hard easy and the impossible tractable ;).

> > The extension mechanisms in programming languages don't suffer
> > from those problems.  If I write a macro, I only have to debug it
> > once, and the user doesn't have to think about the inner workings
> > of that macro.
> 
> I accept the power of the macro, but lisp isn't the only language
> where you can write functional sub-routines. But, how (conceptually)

Ah, but Common Lisp macros (together with higher-order functions)
are at a different level of abstraction than functions.  Macros let
you create new control structures, which most other languages don't let
you do (functional languages often do this solely via higher-order
functions (the monad "pattern"), but I find this a bit less clear
in many circumstances, though I do love monad-based parsing).

> is this different from writing a library or package that you can use
> again? Apart from the obvious (like it is alot harder to write a
> package or library) that is.

I wouldn't say it is a lot harder to write a package or library.
Macro writing can be quite hard (to get right).  The difference lies
elsewhere.  Let me snip this from your reply to Tim Bradshaw:

> Tim Bradshaw wrote:
> 
> > Patterns may appear for a time in Lisp, but the language absorbs
> > them into itself.
> 
> This I find very interesting. Say I wrote code that implemented all
> the patterns in the GoF as a C++ template library, phoned up the ANSI
> committee and convinced them that this was an important inclusion to
> the C++ standard and they agreed. [this could of course be the start
> of a new language C+=2 :-) but I digress]. Having got it included,
> C++ would also have absorbed patterns. Is this the kind of absorption
> you are talking about? is this absorption in the perl/CPAN tradition:
> you have a core distribution and a load of interesting stuff you can
> download if you need it?  Or is this something else?

Yes that might be the kind of absorption we are talking about (it
doesn't have to get included in any standard, for that matter.  Often
the interesting patterns are only instantiable in a domain-specific
way, so you just use them in your domain).

But the important point is: You can't implement a C++ template library
that obviates the need for (most of) the GoF patterns.  Take for example
the Visitor pattern: In effect, you would want multiple dispatch (and
generic-function centric OOP for that matter) to obviate the need for
that pattern.  This is just not possible in C++.  If templates had
been able to solve all the problems treated in GoF, then the
authors would just have written a template library, and been done with 
it (remember that they always mention when something could be
implemented via templates, and show you the restrictions that such an
implementation would bring about).

OTOH, it can be argued that you can implement the core concept behind the
visitor pattern directly in (basic) Common Lisp, and that this has in fact
been done many years ago: CLOS' multiple-dispatch and generic-function
centric OOP.  In fact OO-programming is just a huge pattern, and can be
done in C/Fortran/whatever.  But many found it (rightfully) better to
extend their languages to provide better direct support for
OO-programming, thereby eliminating the need for users to implement
the OO-pattern themselves.  The interesting point here is, that where
Bjarne Stroustrup had to invent a new language (and a new compiler,
etc.) called C++ to do it, in the Lisp camp, OO subsystems were
implemented/embedded directly in CL, without ever writing a new
compiler, or a new language.  CLOS is just one of the OO subsystems
invented for CL.[1]
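
As a toy illustration of the multiple-dispatch point (the classes here are
invented, not from any system mentioned above), the "visiting" simply becomes
methods specialized on both arguments:

  (defclass circle () ())
  (defclass square () ())
  (defclass svg-backend () ())
  (defclass text-backend () ())

  (defgeneric render (shape backend)
    (:documentation "Dispatches on SHAPE and BACKEND at the same time."))

  (defmethod render ((s circle) (b svg-backend))  "<circle/>")
  (defmethod render ((s circle) (b text-backend)) "o")
  (defmethod render ((s square) (b svg-backend))  "<rect/>")
  (defmethod render ((s square) (b text-backend)) "[]")

  ;; (render (make-instance 'square) (make-instance 'text-backend)) ;; => "[]"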

So I can argue pretty easily, that CL is able to often directly implement
the concepts behind many low-level patterns once, and then let users get
on with it, whereas most other languages have to extend the language
itself to do this.  This might explain their reluctance to extend the
language in certain ways, because it is easier to let users implement
patterns all over the place than extend standardized languages.  Since CL
doesn't need to extend the language (at the implementors level), we don't
need low-level patterns quite so often.

For another example take non-deterministic programming:  Either you
rely on patterns to implement its concepts (like back-tracking,
etc.), or you invent a new programming language centered around those
concepts.  Or you write Screamer, which extends CL (from within the
language, not by changing the implementation, of course) for
non-deterministic programming.  From the FAQ:

  Screamer is an extension of Common Lisp that adds support for
  nondeterministic programming.  Screamer consists of two levels.  The
  basic nondeterministic level adds support for backtracking and
  undoable side effects.  On top of this nondeterministic substrate,
  Screamer provides a comprehensive constraint programming language in
  which one can formulate and solve mixed systems of numeric and
  symbolic constraints.  Together, these two levels augment Common Lisp
  with practically all of the functionality of both Prolog and
  constraint logic programming languages such as CHiP and CLP(R).
  Furthermore, Screamer is fully integrated with Common Lisp. Screamer
  programs can coexist and interoperate with other extensions to Common
  Lisp such as CLOS, CLIM and Iterate.

  [...]

  Screamer was written by Jeffrey Mark Siskind and David Allen
  McAllester.  It is available by anonymous FTP from either
  ftp.ai.mit.edu, ftp.cis.upenn.edu, or ftp.cs.toronto.edu as the file
  /pub/qobi/screamer.tar.Z. Contact Jeffrey Mark Siskind
  (····@CS.Toronto.EDU) for further information.
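
Just to give a feel for the nondeterministic level, here is a tiny sketch
(assuming Screamer is loaded and we are in the SCREAMER-USER package;
AN-INTEGER-BETWEEN, FAIL and ALL-VALUES are, as far as I recall, part of
Screamer's basic level):

  (in-package :screamer-user)

  (defun a-pythagorean-triple (n)
    ;; Nondeterministically pick three integers up to N, and FAIL
    ;; (i.e. backtrack) unless they form a Pythagorean triple.
    (let ((a (an-integer-between 1 n))
          (b (an-integer-between 1 n))
          (c (an-integer-between 1 n)))
      (unless (= (+ (* a a) (* b b)) (* c c)) (fail))
      (list a b c)))

  ;; Collect every solution by backtracking over all the choice points:
  (all-values (a-pythagorean-triple 15))
  ;; => all (a b c) with a^2 + b^2 = c^2 and a, b, c <= 15, e.g. (3 4 5)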

> > ...I feel that patterns work best at the higher levels of
> > implementation and design, where programming languages can't
> > reach...
> 
> Being flippant, doesn't this sound like a commercial for a type of Danish
> lager? ;-)

If slogans were software, I'd patent the slogan, and sue the producer
of a Danish lager for infringement ;)

Regs, Pierre.

Footnotes: 
[1]  It just so happens that CLOS was the result of standardization
efforts to unify the OO subsystems that were in existence at the time.
PCL is a portable implementation of CLOS for old CL's that don't have
CLOS built-in.  Nowadays, many compilers and implementations do have
some low-level support structure for CLOS built-in, but that is really 
only a performance-hack (or to better support CLOS high-level concepts 
in their environments).

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <nkjk8s5q4xf.fsf@tfeb.org>
····@acm.org (Pierre R. Mai) writes:


> So I can argue pretty easily, that CL is able to often directly implement
> the concepts behind many low-level patterns once, and then let users get
> on with it, whereas most other languages have to extend the language
> itself to do this.  This might explain their reluctance to extend the
> language in certain ways, because it is easier to let users implement
> patterns all over the place than extend standardized languages.  Since CL
> doesn't need to extend the language (at the implementors level), we don't
> need low-level patterns quite so often.

I think another way of putting this, which sounds like a contradiction
but isn't, is that Lisp people extend the language to do something,
while people who use other languages typically do not do this.

The point is that Lisp is a language which is designed (or perhaps
evolved) for painless language extension, so people using Lisp extend
and change the language as a natural part of their everyday
programming: it is so easy to do that after a while you don't notice
yourself even doing it, but at some point you find you are writing in
a language which is not Lisp any more but some problem-specific
language which was once Lisp (and may subsume Lisp).

Lisp has a number of features that make this possible, the most
obvious being that you can reason about Lisp source in Lisp itself
`for free' (and if you extend Lisp, you can reason about your extended
language in itself).  A more subtle reason, I think, is that Lisp has
chosen this apparently rather rudimentary source representation of
basically nested lists rather than some more clever representation.  A
cleverer representation might make basic programming easier but would
at the same time commit you to much more and thus make extensions
harder to write.
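
A trivial but representative sketch of the kind of everyday extension meant
here, using nothing but Lisp operating on Lisp source (WHILE is not standard
CL, it is just a name picked for the example):

  (defmacro while (test &body body)
    ;; A new looping construct, added to the language in three lines.
    `(do () ((not ,test)) ,@body))

  (let ((i 0))
    (while (< i 3)
      (print i)
      (incf i)))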

In (most) other languages there are various walls in there which mean
that beyond a certain point you can't extend the language without
making a large one-off commitment in time to something like a
preprocessor or even a complete reimplementation, involving learning
stuff like yacc & lex which is enough pain that almost no one does it.
So you get these great convulsive twitches in these languages, where
someone eventually gets so fed up that they implement a completely new
language, which then sticks for many years until the next person who
gets that fed up.  

In between these twitches you get all these patterns appearing, many
of which are really ways of writing programs in a language which has not
yet been designed.  A very good example of this use of patterns is the
Xt part of X11, which is quite clearly an object-oriented framework,
but implemented in C rather than C++.

--tim
From: Quentin Deakin
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7mdffa$ktm$1@epos.tesco.net>
Tim Bradshaw wrote:
> I think, is that Lisp has chosen this apparently rather rudimentary source
> representation of basically nested lists rather than some more clever
> representation.  A cleverer representation might make basic programming
> easier but would at the same time commit you to much more and thus make
> extensions harder to write.

and also

> In (most) other languages there are various walls in there which mean that
> beyond a certain point you can't extend the language without making a large
> one-off commitment in time to something like a preprocessor or even a
> complete reimplementation, involving learning stuff like yacc & lex which
> is enough pain that almost no one does it.

This is so right! I have been trying to write an interactive perl thing (ok
ok, Lisp's got one, so why bother, and yes I agree, but it seemed like a good
thing to do at the time :-| ....) And the most difficult thing there is to
work out where you are in a code block without reparsing the whole block, at
which point you have rewritten the perl interpreter, only worse
(there is no way on this earth that I am as good a hacker as el Wall,
although I also tend not to quote The Lord of the Rings in my code).

I've also wrestled with writing a little C interpreter, because I find the
biggest problem I have with C is that I type a variable name in wrong, or
something like not passing the correct arguments or not calling malloc
correctly, and then 5-10 minutes later, after compiling and linking, the
whole thing falls about your ears because of this and you just have to
curse. Loudly.  So I am not unaware of the yacc (or bison) and (f)lex
beasts. I think there was a reason for putting those very strange animals
on the cover of the O'Reilly handbook.

> So you get these great convulsive twitches in these languages, where
> someone eventually gets so fed up that they implement a completely new
> language, which then sticks for many years until the next person who gets
> that fed up.

And with this I could not agree with you more. The psychology of new
language authors in one sentence.

Best regards,

:-) will
From: Vassil Nikolov
Subject: ...-time (Ex: Re: Why is tcl broken?)
Date: 
Message-ID: <l03130300b3b05f558062@195.138.129.106>
On 1999-07-12 03:58 +0200,
Pierre R. Mai wrote:

  [...]
  > write-time[1] mechanisms
  [...]
  > Footnotes: 
  > [1]  This is the time the program is written (or read/modified by the
  > programmer for that matter).  Given the different times of evaluation 
  > present in (Common) Lisp, the order would be write-time, read-time,
  > compile-time, run-time, ...

I quite agree that this is a valid and useful notion.  I believe, however,
that `write-time' is not the best choice of a term (I have used it myself,
but now I have second thoughts).  My doubts come from the fact that
`write-time' is ambiguous and may also be taken to mean the time when
the results of a program are written to its output.

To avoid that ambiguity, the time the program is written by the programmer
could be called `compose-time' (would this term also work well in the
case of one program producing another, i.e. does `compose-time' subsume
`generate-time' terminologically?).  There is also `design-time' before
`compose-time.'

To borrow from the read-eval-print loop, and again to avoid ambiguity,
the time when the program writes its results could be called `print-time.'

Thus we have in the full model:
  design-time
  compose-time (generate-time?)
  read-time
  compile-time
  load-time (let's not forget this either)
  run-time
  print-time


Vassil Nikolov
Permanent forwarding e-mail: ········@poboxes.com
For more: http://www.poboxes.com/vnikolov
  Abaci lignei --- programmatici ferrei.
From: Stig Hemmer
Subject: Re: ...-time (Ex: Re: Why is tcl broken?)
Date: 
Message-ID: <ekvwvvt6gp5.fsf@epoksy.pvv.ntnu.no>
Vassil Nikolov <········@poboxes.com> writes:
>   design-time
>   compose-time (generate-time?)
>   read-time
>   compile-time
>   load-time (let's not forget this either)
>   run-time
>   print-time

lunch-time

Stig Hemmer,
Jack of a Few Trades.
From: William Tanksley
Subject: Re: ...-time (Ex: Re: Why is tcl broken?)
Date: 
Message-ID: <slrn7pcv17.14p.wtanksle@dolphin.openprojects.net>
On 21 Jul 1999 19:44:22 +0200, Stig Hemmer wrote:
>Vassil Nikolov <········@poboxes.com> writes:
>>   design-time
>>   compose-time (generate-time?)
>>   read-time
>>   compile-time
>>   load-time (let's not forget this either)
>>   run-time
>>   print-time

>lunch-time

An illusion.  Doubly so.

>Stig Hemmer,

-- 
-William "Billy" Tanksley
From: Hume Smith
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <931550920.2035553121@news.glinx.com>
Paul Duffin <·······@mailserver.hursley.ibm.com> wrote:

>So how is Lisp or Forth (of which I only remember postfix notation)
>different to this ?
>
>	(setq code (fortran-expr "some fortran expression"))
>	(eval code)

it seems to me the Real Lisp Way is to use a hook in the reader, so you
can write
   (setq code #fortran{yadeyah})
and the reader knows when a } is or isn't a part of the Fortran
expression; you go completely outside Lisp lexemes and syntax.

--
<URL:http://www.glinx.com/~hclsmith/>
From: Raymond Toy
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <4n908pmqo6.fsf@rtp.ericsson.se>
>>>>> "Hume" == Hume Smith <········@glinx.deliver-me-from-evil.com> writes:

    Hume> it seems to me the Real Lisp Way is to use a hook in the reader, so you
    Hume> can write
    Hume>    (setq code #fortran{yadeyah})
    Hume> and the reader knows when a } is or isn't a part of the Fortran
    Hume> expression; you go completely outside Lisp lexemes and syntax.

I think that's pretty hard to do in general because parsing Fortran
(77) is pretty hard.  Simple Fortran expressions are probably ok, but
handling more of Fortran is hard.  And if all you're doing is a subset 
of Fortran, the infix library does this already.

Like 

	do 937 k = 1,5
	...
    937 continue

is quite different from

	do 937 k = 1.5
	...
    937 continue

(Hint:  `,' vs. `.'.  The first line is the beginning of a do loop; the
second sets the variable do937k to 1.5.)

And a bunch of other fun things that Fortran 77 lets you do.

I don't know if Fortran 90/95/9x has changed these rules or not.  I
don't DO Fortran unless necessary.

Ray
From: Kent M Pitman
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <sfw908pmjhi.fsf@world.std.com>
[ replying to comp.lang.lisp only
  http://world.std.com/~pitman/pfaq/cross-posting.html ]

Hume Smith <········@glinx.deliver-me-from-evil.com> writes:

> it seems to me the Real Lisp Way is to use a hook in the reader, so you
> can write
>    (setq code #fortran{yadeyah})
> and the reader knows when a } is or isn't a part of the Fortran
> expression; you go completely outside Lisp lexemes and syntax.

Assuming you don't want to just enable a readtable that you put equally
on every character which kickstarts a fortran parser... :-)

I wrote a fortran->lisp translator as what was going to be my undergrad
thesis years ago (before I became a philosophy major and ended up not
needing to write a thesis).  I vaguely recall that just for grins we set
up such a readtable to see if it would work; also, I think CGOL (an infix
syntax for Maclisp, developed by Vaughan Pratt) did likewise...
From: William Tanksley
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <slrn7oeu72.223.wtanksle@dolphin.openprojects.net>
On Fri, 09 Jul 1999 09:58:00 +0100, Paul Duffin wrote:
>William Tanksley wrote:
>> On Wed, 07 Jul 1999 11:47:56 +0100, Paul Duffin wrote:
>> >William Tanksley wrote:

>I do not know what the reader is but it sounds as though you have to
>create your own parser which converts from whatever language to Lisp.

Of course.  How else would you expect to parse a different syntax?  Magic?

The point is that Lisp and Forth have hooks to do this trivially.  Tcl
doesn't.

>> >Can you make Lisp look like Tcl ?

>> Yes.  Or C, or Forth.  But you wouldn't want to -- far too much effort.

>> OTOH, I do know of a FORTRAN expression parser in Forth.  Very useful; it
>> simply outputs the Forth equivalent of a FORTRAN formula.  When you're
>> doing scientific calculations you want to keep things as close as possible
>> to the original document.

>> A similar effort in Tcl would be possible, but the user would have to
>> carefully avoid Tcl's special characters, and Tcl would not be able to
>> bytecode the result without changes to the Tcl compiler itself.  (Unless,
>> as is entirely possible, the Tcl optimizer has grown more sophisticated.)

>Assume I have a Tcl command "fortran-expr" which takes a fortran expr
>and converts it into Tcl code.

>	set code [fortran-expr {some fortran expression}]
>	eval $code

>Then of course the fortran expression would have to obey Tcl's rules
>about what can be inside {}.

Of course.

>So how is Lisp or Forth (of which I only remember postfix notation)
>different to this ?
>	(setq code (fortran-expr "some fortran expression"))
>	(eval code)

Err...  That's a pretty sick way of doing it.

>The fortran expression has to obey Lisp's rules about what can be in
>a string which are much more stringent than Tcl's {} as you cannot
>even nest strings.

(#fortran-expr{some fortran expression})

(I chose curly braces because Fortran doesn't use them at all; for C I
would use something else, or perhaps I would just rely on people writing
complete C programs with no curly braces out of place.)

In Forth, you can do anything you want, because there is no language parser
which looks further ahead than the current word.  A word named Fortran{
would be sufficient to imitate the above.

But I suspect that even Lisp would have trouble parsing Python.  I seem to
recall the reader accepting tokens, and Python relies on a token pair that
Lisp doesn't: INDENT and DEDENT.

>> >> >> Reminds me of the people who keep assuming that Python's indentation is
>> >> >> some kind of bug :-).

>> >> >Or that Tcl's comment is a bug.

>> >> Exactly.  It's the result of a deliberate design decision, with benefits
>> >> and drawbacks of its own.  We want to study the design decision, not a
>> >> single side effect.

>> >> Only accusing Python indentation as being a bug is even sillier than
>> >> accusing Tcl comments -- the comments are a side effect, and could be
>> >> hacked away at some expense.  The indentation is a fundamental part of the
>> >> design of the language, and it's why I like Python so much, in spite of
>> >> all the languages (such as Lisp) with much better OO, scoping, and so on.

>> >So what you are saying is the whitespace in Python was an explicit design
>> >decision whereas the comment behaviour in Tcl is a side effect.

>> Sort of.  Both are on the same scale: they're useful and deliberate design
>> decisions.  Whitespace in Python is more deliberate than comments in Tcl:
>> indentation is a fundamental part of Python's definition.  Comments which

>Whitespaces are a fundamental part of Python's design but they add no
>functional benefit, only syntactic benefit which as we all know is
>subjective. It is equally possible to have a Python which uses {} as
>block delimiters.

...and it's equally possible to have a Python which looked and acted
exactly like Tcl.  Whitespace in Python provides a _fundamental_
distinction from other languages, in the same way that the complete
parsers in Lisp provide a fundamental distinction from the ad-hoc parsers
in BASIC or FORTRAN.

>> don't always comment are not fundamental to Tcl, although the _reasons_
>> they do not comment are fundamental.

>Once the decision was made that all Tcl commands are to be processed the
>same and that it was up to the individual command how it uses its
>arguments the behaviour of comments was defined.

It is also special that a comment is a command, but yes, this is what I
meant to say.

>Not breaking (or at least severely damaging with multiple exceptions) the above 
>rule in order to implement a 'proper' comment was a decision which may have
>been made explicitly or implicitly; it does not really matter. The point is that
>Tcl with a 'proper' comment would be a completely different language.

Not really -- it would be Tcl with a single reserved character.

>Therefore I would say that the behaviour of the Tcl comment is more
>fundamental to Tcl than using whitespace for block definition is to Python
>simply because of the cost involved in making a different decision.

Tcl with a single reserved character would be source-incompatible with the
current Tcl, but it would look and write the same.  Python without INDENT
and DEDENT would look like not-Python, but act the same.

The behavior of commands is certainly fundamental to Tcl, but I think it
could have been made more complicated without harming the concept.  Not
that anyone would want more complication.

>I would say that the behaviour of the Tcl comment is a lot harder to
>understand than how Python uses whitespace.

Seems that way, from the relative number of people who challenge the
languages on that basis.

>Paul Duffin

-- 
-William "Billy" Tanksley
From: Pierre R. Mai
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <87oghk2s40.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@dolphin.openprojects.net (William Tanksley) writes:

> But I suspect that even Lisp would have trouble parsing Python.  I seem to
> recall the reader accepting tokens, and Python relies on a token pair that
> Lisp doesn't: INDENT and DEDENT.

A reader macro can choose to read data from the stream any way it
wants to.  It doesn't have to use the reader recursively to read in
tokens.  That means a reader macro that would switch to reading Python 
with indenting would be possible without problems, should one choose
to do so (remember that strings are in effect read by the reader macro 
for the character #\", and that certainly respects whitespace)...
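
For instance, a minimal sketch of the #fortran{...}-style hook discussed
upthread (using a plain #{...} dispatch rather than the full #fortran{
spelling; the helper name is made up, nested braces are not handled, and
the actual Fortran translation is left out):

  (defun read-foreign-text (stream subchar arg)
    (declare (ignore subchar arg))
    ;; Slurp raw characters up to the closing brace, bypassing the normal
    ;; Lisp token rules entirely.  A real version would hand the string to
    ;; a Fortran (or Python) parser instead of just returning it.
    (with-output-to-string (out)
      (loop for ch = (read-char stream t nil t)
            until (char= ch #\})
            do (write-char ch out))))

  (set-dispatch-macro-character #\# #\{ #'read-foreign-text)

  ;; Now #{X(I) = Y(I) + 1.0} reads as the string "X(I) = Y(I) + 1.0".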

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Tim Bradshaw
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <nkjzp1hizvb.fsf@tfeb.org>
 Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> I am surprised at this because Tcl's [upvar] is simply an explicit form
> of dynamic binding which I seem to remember is what Lisp uses. (That
> is how the 'let' function can be (is) implemented as a lambda function).

No, (Common) Lisp doesn't use dynamic binding by default, and you
don't need anything like upvar to implement LET.  In fact LET is
really the most common case of a lisp design pattern -- `WITH-x ->
CALL-WITH-x':

	(let ((a b)) . body)
     -> ((lambda (a) . body) b)

(Writing the macro for this is a little fiddly if you want to get it
right in all cases, but not more than 10 minutes work.)
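
A quick sketch of that macro, under a made-up name and handling only the
simple (var value) binding form, none of the fiddly cases:

  (defmacro my-let (bindings &body body)
    ;; (my-let ((a b)) ...)  ==>  ((lambda (a) ...) b)
    `((lambda ,(mapcar #'first bindings) ,@body)
      ,@(mapcar #'second bindings)))

  (my-let ((a 1) (b 2))
    (+ a b))                          ; => 3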

> I find that Lisp macros (while a very powerful and necessary mechanism)
> are sooo confusing. They are in essence another language inside Lisp,
> and as such introduce inconsistencies. Tcl on the other hand doesn't
> need a macro language and as such is much more consistent than Lisp.

But the language of Lisp macros is Lisp, that's really the whole
point!  Without knowing TCL, I find it hard to see how you can
introduce new constructs to the language *without* a macro language,
even if that language is TCL.

--tim
From: Fernando Mato Mira
Subject: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <377A3383.D453468@iname.com>
Tim Bradshaw wrote:

> But the language of Lisp macros is Lisp, that's really the whole
> point!  Without knowing TCL, I find it hard to see how you can
> introduce new constructs to the language *without* a macro language,
> even if that language is TCL.

Err.. What about Forth keywords?
From: William Tanksley
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7nkm9t.579.wtanksle@dolphin.openprojects.net>
On Wed, 30 Jun 1999 17:10:59 +0200, Fernando Mato Mira wrote:
>Tim Bradshaw wrote:

>> But the language of Lisp macros is Lisp, that's really the whole
>> point!  Without knowing TCL, I find it hard to see how you can
>> introduce new constructs to the language *without* a macro language,
>> even if that language is TCL.

>Err.. What about Forth keywords?

Forth doesn't have keywords, so I'm guessing you mean immediate words.
They're written in the same language, but they tend to use words that
wouldn't be used otherwise (source access and such).  So they really are a
separate language; after all, Forth is all about building new languages.

Another possible meaning of 'keyword' might be creating words.  Those are
cool, and don't require or generally use new vocabulary, but do require
grasping some pretty new concepts -- compile-time and runtime take on new
meanings.

-- 
-William "Billy" Tanksley
From: William H. Duquette
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <377aa228.31380423@news.jpl.nasa.gov>
On Wed, 30 Jun 1999 17:10:59 +0200, Fernando Mato Mira
<········@iname.com> wrote:

>
>
>Tim Bradshaw wrote:
>
>> But the language of Lisp macros is Lisp, that's really the whole
>> point!  Without knowing TCL, I find it hard to see how you can
>> introduce new constructs to the language *without* a macro language,
>> even if that language is TCL.
>
>Err.. What about Forth keywords?

Tcl is a lot like Forth, too. :-)

A Tcl command is just a procedure that can process its white-space
delimited arguments however it likes, including evaluating them as Tcl
code. There's nothing in the language but Tcl commands; the standard
control structures like "if", "for", "foreach", and "while" can
be written in Tcl.  By using the same kinds of tricks, you can
add new constructs of whatever kind you like, *provided* that
your new constructs have standard Tcl syntax.

You can do more interesting things in Lisp with readmacros,
from what I'm told, including making it look like an entirely
different language.  Tcl code always looks like Tcl code.

Will

--------------------------------------------------------------------------
Will Duquette, JPL  | ··················@jpl.nasa.gov
But I speak only    | http://eis.jpl.nasa.gov/~will (JPL Use Only)
for myself.         | It's amazing what you can do with the right tools.
From: William Tanksley
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7nlej3.5j1.wtanksle@dolphin.openprojects.net>
On Wed, 30 Jun 1999 23:07:23 GMT, William H. Duquette wrote:
>On Wed, 30 Jun 1999 17:10:59 +0200, Fernando Mato Mira
>>Tim Bradshaw wrote:

>>> But the language of Lisp macros is Lisp, that's really the whole
>>> point!  Without knowing TCL, I find it hard to see how you can
>>> introduce new constructs to the language *without* a macro language,
>>> even if that language is TCL.

>>Err.. What about Forth keywords?

>Tcl is a lot like Forth, too. :-)

That's how it initially struck me -- Tcl looks like it was, at least in
concept, designed by someone attempting to apply Forth's looseness to a
language with a parser and a syntax.  This is something I've always been
curious about, and Tcl satisfied that curiosity -- I don't think it's a
good mix.

>You can do more interesting things in Lisp with readmacros,
>from what I'm told, including making it look like an entirely
>different language.  Tcl code always looks like Tcl code.

Now Lisp is a worthwhile improvement on Forth.  ;-)

(Yes, I know -- the ancestry goes the other way around: Lisp -> Scheme ->
Forth.)

>Will

-- 
-William "Billy" Tanksley
From: Lars Marius Garshol
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <wkzp1g2t2e.fsf@ifi.uio.no>
* William Tanksley
| 
| (Yes, I know -- the ancestry goes the other way around: Lisp ->
| Scheme -> Forth.)

Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
quite unsure of whether Chuck Moore knew Lisp at all.

--Lars M.
From: William Tanksley
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7nnfs3.19k.wtanksle@dolphin.openprojects.net>
On 01 Jul 1999 19:11:53 +0200, Lars Marius Garshol wrote:
>* William Tanksley

[I made the joking claim that Lisp was an improvement on Forth, then to
defuse criticism I claimed:]
>| (Yes, I know -- the ancestry goes the other way around: Lisp ->
>| Scheme -> Forth.)

>Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
>quite unsure of whether Chuck Moore knew Lisp at all.

I don't think Forth is that old, certainly not in any usable form.  Here,
according to the history on www.forth.com, "the first program to be called
Forth was written in about 1970."  That substantiates what you're saying
about Scheme -- I'm a little surprised.  Okay, toss Scheme out of the
ancestry ;-).

Chuck Moore, the inventor of Forth, got a BA in physics from MIT and went
into grad school at Stanford.  He claims to have taken classes from and
learned a lot from Lisp.

Forth is very much like Lisp, with the subtraction of memory management
and the addition of implicit parameter passing.  Forth is Lisp for people
who prefer "as simple as possible" to "but no simpler."  (And it IS a very
good language; I use it more than Scheme.)

A paragraph like that is necessarily an oversimplification.  Perhaps I can
get away with saying that Forth learned more from Lisp than it did from
any other language existing at the time?  Yes, I think that's tenable.

>--Lars M.

-- 
-William "Billy" Tanksley
From: Fernando Mato Mira
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <377BDFAC.EA5CD228@iname.com>
I think Lisp is the dual of Forth.
Is that why I always felt attracted to it (although I never really learned it
yet)?
Who came first doesn't matter then. Maybe we should add an 's' to "The Right
Thing"(TM).

Hey, I was wrong thinking Mind.Forth was sadly misplaced?

Uh.Uh. The `Dark' Side is calling me.. ;-)

[ObPython: Is anybody reading this only on the python _mailing list_? Unless
someone says so,
remove comp.lang.python]
From: Tim Bradshaw
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <ey3pv2ckt0q.fsf@lostwithiel.tfeb.org>
[Removed all but comp.lang.lisp]
* William Tanksley wrote:

> Forth is very much like Lisp, with the subtraction of memory management
> and the addition of implicit parameter passing.  Forth is Lisp for people
> who prefer "as simple as possible" to "but no simpler."  (And it IS a very
> good language; I use it more than Scheme.)

For what it's worth there are ongoing forth-lisp links in the form of
HP's language for their advanced calculators, RPL, which stands for
Reverse Polish Lisp, although it is really closer to Forth with memory
management.  It's actually a delightful language to use considering
the limitations of the device it runs on. It is a little frustrating
to those used to Lisp sometimes though.

(Of course, that doesn't stop people clamouring for a C++ compiler for
the hp48!).

--tim
From: Christopher Browne
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <9kgf3.301$F7.23516@news3.giganews.com>
On Thu, 01 Jul 1999 19:20:36 GMT, William Tanksley
<········@dolphin.openprojects.net> wrote:
>On 01 Jul 1999 19:11:53 +0200, Lars Marius Garshol wrote:
>>* William Tanksley
>
>[I made the joking claim that Lisp was an improvement on Forth, then to
>defuse criticism I claimed:]
>>| (Yes, I know -- the ancestry goes the other way around: Lisp ->
>>| Scheme -> Forth.)
>
>>Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
>>quite unsure of whether Chuck Moore knew Lisp at all.
>
>I don't think Forth is that old, certainly not in any usable form.  Here,
>according to the history on www.forth.com, "the first program to be called
>Forth was written in about 1970."  That substantiates what you're saying
>about Scheme -- I'm a little surprised.  Okay, toss Scheme out of the
>ancestry ;-).

Yes, please.  If there's a relationship between Forth and Lisp, it does
not lie in the Scheme direction.

>Chuck Moore, the inventor of Forth, got a BA in physics from MIT and went
>into grad school at Stanford.  He claims to have taken classes from and
>learned a lot from Lisp.

I don't remember seeing any indication of Moore having encountered
Lisp; if so, that is very interesting.

>Forth is very much like Lisp, with the subtraction of memory management
>and the addition of implicit parameter passing.  Forth is Lisp for people
>who prefer "as simple as possible" to "but no simpler."  (And it IS a very
>good language; I use it more than Scheme.)
>
>A paragraph like that is necessarily an oversimplification.  Perhaps I can
>get away with saying that Forth learned more from Lisp than it did from
>any other language existing at the time?  Yes, I think that's tenable.

Moore was (and seems still to be) *very* independently-minded; anyone
that remembers "BLOCK wars" or "FP stack wars" or "Is ANSI Forth
uselessly incompatible?" is probably also aware that Moore has been
sufficiently "extreme" as to be above and beyond all such "petty"
disagreements.

The fact that he actually designed some microprocessors with OK makes
the unbelievably sparse architecture of OK seem faintly believable.

(Or, putting it another way, if he *hadn't* done such massive things
as designing several Forth-related languages, founding companies, and
designing, building, and even *selling* families of microprocessors,
the rather radical things he says would probably be regarded as the
ravings of a Net.Kook.)

Forth has enough features in common with Lisp (e.g., having a stack,
which allows recursion even in the simplest of words) to make an
association not unbelievable.

But the lack of GC, dynamic allocation, and the likes keep them
distant enough that you have to look closely to see the resemblance.
-- 
DOS: n., A small annoying boot virus that causes random spontaneous
system crashes, usually just before saving a massive project.  Easily
cured by UNIX.  See also MS-DOS, IBM-DOS, DR-DOS.  (from David Vicker's
.plan)
········@hex.net- <http://www.hex.net/~cbbrowne/langobscure.html>
From: wil blake
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <7lug3l$lra$1@news.campuscwix.net>
Christopher Browne wrote in message
<··················@news3.giganews.com>...

>Yes, please.  If there's a relationship between Forth and Lisp, it does
>not lie in the Scheme direction.

>I don't remember seeing any indication of Moore having encountered
>Lisp; if so, that is very interesting.

In the special Forth issue of Byte Magazine (August 1980?), an article ("The
Evolution of Forth, an Unusual Language"?) quoting Chuck Moore gives a Forth
history that includes one Forth code listing containing the word "atoms" and a
quote from Moore saying in effect "The Lisp influence is evident."  I think
that the article may have originated from a Rochester Forth Conference
discussion.
-Wil Blake
From: Tim Bradshaw
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <ey3r9mskvja.fsf@lostwithiel.tfeb.org>
* Lars Marius Garshol wrote:

> Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
> quite unsure of whether Chuck Moore knew Lisp at all.

That's OK, call/cc is trivially causality-violating so scheme can
happily be an influence on languages older than it is.

--tim
From: Jeff Dalton
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <x27lojoums.fsf@todday.aiai.ed.ac.uk>
Tim Bradshaw <···@tfeb.org> writes:

> * Lars Marius Garshol wrote:
> 
> > Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
> > quite unsure of whether Chuck Moore knew Lisp at all.

Well *Lisp* dates back to the late 50s.  Scheme may be from 75,
but Lisp is a different matter.  And lambda calculus is, what,
1949?
From: Michael P. Reilly
Subject: Re: Language extensibility
Date: 
Message-ID: <gZ6f3.650$6M6.197790@news.shore.net>
In comp.lang.python Jeff Dalton <····@todday.aiai.ed.ac.uk> wrote:
: Tim Bradshaw <···@tfeb.org> writes:

:> * Lars Marius Garshol wrote:
:> 
:> > Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
:> > quite unsure of whether Chuck Moore knew Lisp at all.

: Well *Lisp* dates back to the late 50s.  Scheme may be from 75,
: but Lisp is a different matter.  And lambda calculus is, what,
: 1949?

Supposedly, Lisp was created in 1953, a year after Fortran, and is said
to be the second higher-level (non-assembly) language written.

But then... I wasn't around at that time, so what do I know. ;)

  -Arcege
From: Tim Bradshaw
Subject: Re: Language extensibility
Date: 
Message-ID: <ey3oghulrui.fsf@lostwithiel.tfeb.org>
* Michael P Reilly wrote:

> Supposedly, Lisp was created in 1953, a year after Fortran, and is said
> to be the second higher-level (non-assembly) language written.

No, I think it's 1958 -- the 40th anniversary conference was last
year, after all!

(A good year for programming languages and guitars both).

--tim
From: Christopher R. Barry
Subject: Re: Language extensibility
Date: 
Message-ID: <871zeqi161.fsf@2xtreme.net>
Tim Bradshaw <···@tfeb.org> writes:

> * Michael P Reilly wrote:
> 
> > Supposedly, Lisp was created in 1953, a year after Fortran, and is said
> > to be the second higher-level (non-assembly) language written.
> 
> No, I think it's 1958 -- the 40th anniversary conference was last
> year, after all!
> 
> (A good year for programming languages and guitars both).

Hmm... L-4 CES, Les Paul... it seems every classic Gibson was
made in 1958....

Christopher
From: Jerome Kalifa
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <wmcn1xenodh.fsf@polytechnique.fr>
Jeff Dalton <····@todday.aiai.ed.ac.uk> writes:
[...]
> 
> Well *Lisp* dates back to the late 50s.  Scheme may be from 75,
> but Lisp is a different matter.  And lambda calculus is, what,
> 1949?
> 

No, as a mathematical theory, it was developed before the war.
-- 
Jerome Kalifa
Centre de Mathematiques Appliquees,  Ecole Polytechnique.
91128 Palaiseau Cedex, France.  (33)169333981
From: Eugene Leitl
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <14206.52213.119553.809177@lrz.de>
Jerome Kalifa writes:

 > > Well *Lisp* dates back to the late 50s.  Scheme may be from 75,
 > > but Lisp is a different matter.  And lambda calculus is, what,
 > > 1949?
 > > 
 > 
 > No, as a mathematical theory, it was developed before the war.

Church, A., The Calculi of Lambda Conversion, Princeton University
Press, Princeton, N.J., 1941 
From: Lars Marius Garshol
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <wk3dz4u320.fsf@ifi.uio.no>
* Jeff Dalton
|
| Well *Lisp* dates back to the late 50s.  Scheme may be from 75, but
| Lisp is a different matter.  And lambda calculus is, what, 1949?

* Jerome Kalifa
| 
| No, as a mathematical theory, it was developed before the war.

* Eugene Leitl
| 
| Church, A., The Calculi of Lambda Conversion, Princeton University
| Press, Princeton, N.J., 1941

According to David Harel's Algorithmics, the correct reference is
actually S.C. Kleene "A Theory of Positive Integers in Formal Logic",
Amer. J. Math. 57 (1935), pp. 153-173,219-244.

And there seems to be general agreement that the lambda calculus dates
back to the 30s. 

--Lars M.
From: Eugene Leitl
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <14207.64080.35726.524863@lrz.de>
Lars Marius Garshol writes:
 
 > * Eugene Leitl
 > | 
 > | Church, A., The Calculi of Lambda Conversion, Princeton University
 > | Press, Princeton, N.J., 1941
 > 
 > According to David Harel's Algorithmics, the correct reference is
 > actually S.C. Kleene "A Theory of Positive Integers in Formal Logic",
 > Amer. J. Math. 57 (1935), pp. 153-173,219-244.

Wildly off-topic, but fun: http://www.cl.cam.ac.uk/users/mgh1001/limericks.html

One of the most prolific writers of Limericks was John von Neumann, 
who is said to have written thousands of them. One of my favorites 
of his augurs the development of VR ... 

     There was a young man called Kleene, 
     Who invented a ·@#&ing machine, 
         Concave, or convex, 
         It fit either sex, 
     And was remarkably easy to clean! 

     - John von Neumann 

Thanks to Jim Horning for this version, which may well be the more correct: 

     There was a young man called Kleene, 
     Who invented a ·@#&ing machine, 
         Concave and convex, 
         It could screw either sex, 
     And diddle itself in between! 

     - John von Neumann 
From: Eugene Leitl
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <14204.912.383867.589108@lrz.de>
Lars Marius Garshol writes:
 > 
 > Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
 > quite unsure of whether Chuck Moore knew Lisp at all.

Chuck is mostly doing (quirky, but fascinating) hardware design 
nowadays. Check out http://www.ultratechnology.com/ to see what he has
been up to recently.
From: Christopher B. Browne
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7noclt.429.cbbrowne@knuth.brownes.org>
On Thu,  1 Jul 1999 17:12:06 -0700 (PDT), Eugene Leitl <············@lrz.uni-muenchen.de> posted:
>Lars Marius Garshol writes:
> > 
> > Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
> > quite unsure of whether Chuck Moore knew Lisp at all.
>
>Chuck is mostly doing (quirky, but fascinating) hardware design 
>nowadays. Check out http://www.ultratechnology.com/ to see what he has
>been up to recently.

Anyone else that said the things Chuck Moore does would be considered
a "net-kook."  He's had some odd-ball stuff such as:
- Actually using a "chording" keyboard
- Eliminating source code (OK)
- A 20 bit microprocessor
- A consciously tail-recursive Forth
- Designing chips by directly manipulating arrays, rather than using
abstract languages.
- The claim that code should *not* be written to be reusable, but rather
should be tightened so as to "throw away everything that isn't being used."

If it weren't that he's actually gotten truly useful results out of
all of these things, he'd probably be considered "just another of the
opinionated fools" that clutter up the Internet.

Note that the final item on that list is really crucial; he minimizes
things to the max, and the rather bizarre results are indeed direct
results of that philosophy.
-- 
Those who do not understand Unix are condemned to reinvent it, poorly. 	
-- Henry Spencer          <http://www.hex.net/~cbbrowne/lsf.html>
········@hex.net - "What have you contributed to free software today?..."
From: Lieven Marchand
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <m3btdvhrlt.fsf@localhost.localdomain>
········@news.brownes.org (Christopher B. Browne) writes:

> If it weren't that he's actually gotten truly useful results out of
> all of these things, he'd probably be considered "just another of the
> opinionated fools" that clutter up the Internet.
> 
> Note that the final item on that list is really crucial; he minimizes
> things to the max, and the rather bizarre results are indeed direct
> results of that philosophy.

The difference between Moore and "just another of the opinionated
fools" is indeed that he gets results. I'm more into Common Lisp these
days but "Thinking Forth" is a book that has radically changed many of
my ideas about programming. 

It's also interesting to see that he resists most of the additions
that have been made to "Standard Forth" in the last decade and still
gets the work done. Perhaps Forth and Lisp are really duals in the
language space with Common Lisp and Moore's minimalist Forth on the
extreme ends. Forth and Lisp (with Basic) are also the languages with
a lot of variations and differences (perhaps they should be called
language families?).

-- 
Lieven Marchand <···@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker
From: David Thornley
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <d2Of3.1233$U5.237262@ptah.visi.com>
In article <··············@localhost.localdomain>,
Lieven Marchand  <···@bewoner.dma.be> wrote:
>········@news.brownes.org (Christopher B. Browne) writes:
>
>The difference between Moore and "just another of the opinionated
>fools" is indeed that he gets results. I'm more into Common Lisp these
>days but "Thinking Forth" is a book that has radically changed many of
>my ideas about programming. 
>
I tend to learn new ways of thinking with new languages.  I came up with
a really nice way of doing one particular thing in a C++ environment,
and I'm positive it was inspired by "Lispy" thinking.  (Now if I can
get my boss to buy in on it.....)

"Thinking Forth" is an excellent book to read.  So is Stroustrup's
"Design and Evolution of C++".  I haven't seen a similar book about
the Lisp way of thinking, not counting books specifically on learning
Lisp and doing it well.  Then again, I find Common Lisp revolutionary
enough sometimes.

>gets the work done. Perhaps Forth and Lisp are really duals in the
>language space with Common Lisp and Moore's minimalist Forth on the
>extreme ends. Forth and Lisp (with Basic) are also the languages with

Lisp and Forth are surprisingly similar in some ways, and way different
in others.

--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Cameron Laird
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <7lof8c$o9o$1@Starbase.NeoSoft.COM>
In article <····················@ptah.visi.com>,
David Thornley <········@visi.com> wrote:
			.
			.
			.
>"Thinking Forth" is an excellent book to read.  So is Stroustrup's
>"Design and Evolution of C++".  I haven't seen a similar book about
>the Lisp way of thinking, not counting books specifically on learning
>Lisp and doing it well.  Then again, I find Common Lisp revolutionary
			.
			.
			.
Not *SICP*?
-- 

Cameron Laird           http://starbase.neosoft.com/~claird/home.html
······@NeoSoft.com      +1 281 996 8546 FAX
From: David Thornley
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <F1Tf3.1270$U5.242370@ptah.visi.com>
In article <·················@nntp.ix.netcom.com>,
Dennis Lee Bieber <·······@ix.netcom.com> wrote:
>On Sun, 04 Jul 1999 18:50:49 GMT, ········@visi.com (David Thornley)
>declaimed the following in comp.lang.python:
>
>
>> "Thinking Forth" is an excellent book to read.  So is Stroustrup's
>> "Design and Evolution of C++".  I haven't seen a similar book about
>> the Lisp way of thinking, not counting books specifically on learning
>> Lisp and doing it well.  Then again, I find Common Lisp revolutionary
>> enough sometimes.
>>
>	Unfortunately, it predates much of what is now considered to be
>LISP.
>
>	"Anatomy of LISP", John Allen, (1978 McGraw-Hill Computer
>Science Series).
>
>	While using LISP, it was started as a data structures type
>course...
>
I am unfamiliar with that.  I suppose I could track it down.

As for the suggestion of SICP, all I can do is plead a temporary
memory failure.  I should have thought of that unprompted.


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Michael Vanier
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <jln1x6wqr3.fsf@arathorn.bbb.caltech.edu>
········@visi.com (David Thornley) writes:
> In article <··············@localhost.localdomain>,
> Lieven Marchand  <···@bewoner.dma.be> wrote:
> >········@news.brownes.org (Christopher B. Browne) writes:
> >
> >The difference between Moore and "just another of the opinionated
> >fools" is indeed that he gets results. I'm more into Common Lisp these
> >days but "Thinking Forth" is a book that has radically changed many of
> >my ideas about programming. 
> >
> I tend to learn new ways of thinking with new languages.  I came up with
> a really nice way of doing one particular thing in a C++ environment,
> and I'm positive it was inspired by "Lispy" thinking.  (Now if I can
> get my boss to buy in on it.....)
> 
> "Thinking Forth" is an excellent book to read.  So is Stroustrup's
> "Design and Evolution of C++".  I haven't seen a similar book about
> the Lisp way of thinking, not counting books specifically on learning
> Lisp and doing it well.  Then again, I find Common Lisp revolutionary
> enough sometimes.

IMHO the best book of this sort for Lisp is "On Lisp" by Paul Graham (and to a
lesser extent Graham's "ANSI Common Lisp").  After reading those books, you'll
never want to live without (Lisp) macros again ;-)  SICP is a *great* book,
but it's more about the Big Ideas In Computing than about Lisp/Scheme per se.
Graham's books describe how Lisp features (especially macros) allow you to
work at a higher level of abstraction than is possible in other languages, and
may evoke a religious transformation in susceptible readers ;-)

But-are-prefix-syntaxes-*really*-superior-to-infix-ones?-
You-be-the-judge-ly y'rs, Mike



-------------------------------------------------------------------------
Mike Vanier	·······@bbb.caltech.edu
Department of Computation and Neural Systems, Caltech 216-76
Will optimize nonlinear functions with complex parameter spaces for food.
From: Fernando Mato Mira
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <377C732D.3909170F@iname.com>
That SIGPLAN column is truly enticing. Java has one, too. Hello?
From: William Tanksley
Subject: Re: Language extensibility (was: Why is tcl broken?)
Date: 
Message-ID: <slrn7nqm4f.3df.wtanksle@dolphin.openprojects.net>
On Fri, 02 Jul 1999 03:33:03 GMT, Christopher B. Browne wrote:
>On Thu,  1 Jul 1999 17:12:06 -0700 (PDT), Eugene Leitl <············@lrz.uni-muenchen.de> posted:
>>Lars Marius Garshol writes:

>> > Huh? Forth dates back to the 60s, whereas Scheme is from 1975, and I'm
>> > quite unsure of whether Chuck Moore knew Lisp at all.

>>Chuck is mostly doing (quirky, but fascinating) hardware design 
>>nowadays. Check out http://www.ultratechnology.com/ to see what he has
>>been up to recently.

>Anyone else that said the things Chuck Moore does would be considered
>a "net-kook."  He's had some odd-ball stuff such as:

Chuck would be a kook if he ever argued about what he does -- we need more
people like him, who are willing to explore without being driven to argue
with people about it.

>- Actually using a "chording" keyboard
>- Eliminating source code (OK)

He now thinks both were a bad idea -- a failed experiment.

>- A 20 bit microprocessor

This was driven by economics, not experiment -- he prefers 32 (or 33) bits
in real life ;-).  I think that given the choice, he'd make it 33 bits
internal and 32 bits external.

The extra bit, for those of you not following Chuck's work, provides a
carry bit for every number on the stack.  A very cool way to write
smaller, simpler code.

>- A consciously tail-recursive Forth

One of his newer things...  Time will tell.  It allows for writing code
without loops, but it's an implicit and required optimization, which is
not something Forth usually likes.

>- Designing chips by directly manipulating arrays, rather than using
>abstract languages.

To the best of my knowledge he never did this.  He wrote a VLSI CAD
program from scratch (without source code, see OK) so that he could design
his processor by drawing lines and connecting them.

>- The claim that code should *not* be written to be reusable, but rather
>should be tightened so as to "throw away everything that isn't being used."

Or more accurately, the design should not include things that aren't
required (and the requirements should be trimmed to include only what's
known to be needed right now).  You don't throw things out of working code
just for the heck of it, you know ;-).

>If it weren't that he's actually gotten truly useful results out of
>all of these things, he'd probably be considered "just another of the
>opinionated fools" that clutter up the Internet.

And most of the opinionated fools (ahem, like yours truly) don't have half
of the exploratory genius that he does.  Even if he does fail sometimes.

>Note that the final item on that list is really crucial; he minimizes
>things to the max, and the rather bizarre results are indeed direct
>results of that philosophy.

Yes, simple is beautiful.  Simple is also maintainable.  I think everyone
in these language groups has to agree with that; our languages are
different, but they're all based upon some conception of simplicity.

If I were more prone to cross-posting I'd toss this over to the Forth NG,
but we don't really need that :-).

>········@hex.net - "What have you contributed to free software today?..."

I maintain Omega -- only LGPL, but it's only a game.

-- 
-William "Billy" Tanksley
From: Tom Breton
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <m36744kjrm.fsf@world.std.com>
Paul Duffin <·······@mailserver.hursley.ibm.com> writes:

> Please enlighten me as to what my misconception is. I am obviously not
> a Lisp expert and I realise that there are plenty of different flavours
> around but the above code works inside Emacs which says that it is
> based on Common Lisp.
[...]
> > 
> > >         (defun inner (b)
> > >           (+ a b))
> > >
> > >         (defun outer (a)
> > >           (inner 3))
> > >
> > >         (outer 2)

In elisp, all variables are "special", which is to say, dynamically
scoped.  Thus inner can find "a" even tho "a" is declared in outer.
This is not so in Common Lisp.
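
In Common Lisp the code above typically signals an unbound-variable error
when inner is called.  A minimal sketch of how to get the elisp behaviour,
renaming the variable *a* per the usual special-variable convention:

  (defvar *a*)                  ; proclaims *A* special, i.e. dynamically scoped

  (defun inner (b)
    (+ *a* b))

  (defun outer (*a*)            ; binding a special variable rebinds it
    (inner 3))                  ; dynamically for the extent of the call

  (outer 2)                     ; => 5, just as in elisp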

> > > A simple macro version of setq would convert from
> > >         (setq symbol '(list))
> > > to
> > >         (set 'symbol '(list))
[...]
> > > A macro setq is not interpreted the same way as set is because if it
> > > was an error would occur when the Lisp interpreter tried to get the
> > > value of the variable symbol before symbol was created.

What seems to be the problem?  An error in the timing of symbol
dereferencing isn't it.  Perhaps you are thinking of some other macro
system, not Lisp's.

BTW, in elisp, which you refer to, there is no problem with setting a
new symbol, (set 'a-new-symbol some-value), because intern both looks
up a symbol and creates it if it doesn't find it.
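
For what it's worth, the macro being described is about as small as macros
get; a sketch under a made-up name, which works in elisp as well as in CL:

  (defmacro my-setq (symbol value)
    ;; Quote the symbol and defer to SET, so the symbol itself is never
    ;; evaluated and no unbound-variable error can occur.
    `(set ',symbol ,value))

  (my-setq some-fresh-symbol '(1 2 3))  ; expands to (set 'some-fresh-symbol '(1 2 3))
  (symbol-value 'some-fresh-symbol)     ; => (1 2 3)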

-- 
Tom Breton, http://world.std.com/~tob
Ugh-free Spelling (no "gh") http://world.std.com/~tob/ugh-free.html
From: Marco Antoniotti
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <lwg137mr0z.fsf@copernico.parades.rm.cnr.it>
Tom Breton <···@world.std.com> writes:

> Paul Duffin <·······@mailserver.hursley.ibm.com> writes:
> 
> > Please enlighten me as to what my misconception is. I am obviously not
> > a Lisp expert and I realise that there are plenty of different flavours
> > around but the above code works inside Emacs which says that it is
> > based on Common Lisp.
> [...]
> > > 
> > > >         (defun inner (b)
> > > >           (+ a b))
> > > >
> > > >         (defun outer (a)
> > > >           (inner 3))
> > > >
> > > >         (outer 2)
> 
> In elisp, all variables are "special", which is to say, dynamically
> scoped.  Thus inner can find "a" even tho "a" is declared in outer.
> This is not so in Common Lisp.

Yep.  This is part of the confusion (which, however, stems from the
fact that Tcl is neither properly statically nor properly dynamically
scoped).

> > > > A simple macro version of setq would convert from
> > > >         (setq symbol '(list))
> > > > to
> > > >         (set 'symbol '(list))
> [...]
> > > > A macro setq is not interpreted the same way as set is because if it
> > > > was an error would occur when the Lisp interpreter tried to get the
> > > > value of the variable symbol before symbol was created.
> 
> What seems to be the problem?  An error in the timing of symbol
> dereferencing isn't it.  Perhaps you are thinking of some other macro
> system, not Lisp's.

The problem is that "variable substitution" in Tcl is
shell-like. Check the Tcl manual page for 'while'.

  set x 0
  while {$x < 10} {
     puts "x is $x"
     incr x
  }

yields a different result from	   

  set x 0
  while $x<10 {
     puts "x is $x"
     incr x
  }

So much for the "regularity of syntax" of Tcl. (BTW, try to rewrite the
'$x<10' test in the second example as '$x < 10'.)

So, Tcl'ers think of "variable substitution" and mess up the much
simpler and cleaner semantics of Lisp.
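
For contrast, a sketch of the same loop in CL: the test is just a form that
gets evaluated afresh on each iteration, so there is no textual substitution
step, braced or unbraced, to get wrong:

  (let ((x 0))
    (loop while (< x 10)
          do (format t "x is ~D~%" x)
             (incf x)))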

Let's repeat it.  Without Tk, we wouldn't be having this discussion at
all.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Tom Breton
Subject: Re: Emacs Lisp (Was: Re: Why is tcl broken?)
Date: 
Message-ID: <m37loellhn.fsf@world.std.com>
······@xarch.tu-graz.ac.at (Reini Urban) writes:

> 
> hmm, wouldn't it be easier to define a new procedure to optionally
> declare some or all inner variables static?
> something like LEXLET / DEFCLOSURE / FUNCTION LAMBDA 
> instead of     LET    / DEFUN      / QUOTE LAMBDA?
> sorry, if such ways already exist in elisp, i don't know it enough.

In elisp, lexical-let in the cl package does what you ask.

-- 
Tom Breton, http://world.std.com/~tob
Ugh-free Spelling (no "gh") http://world.std.com/~tob/ugh-free.html
From: Tom Breton
Subject: Re: Emacs Lisp
Date: 
Message-ID: <m3btdlqpwx.fsf@world.std.com>
Ray Blaak <·····@infomatch.com> writes:

> 
> Dynamic scoping is a fundamental feature of Elisp. *All* (most?) symbols are
> looked up dynamically. 

IIUC, all are.  The lexical-let thing is done with gensym and a macro
trick.

-- 
Tom Breton, http://world.std.com/~tob
Ugh-free Spelling (no "gh") http://world.std.com/~tob/ugh-free.html
From: Greg Ewing
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <37783AA3.CC63664C@compaq.com>
Paul Duffin wrote:
> 
> Actually you find that you can do everything that YOU want to
> in Python and YOU find it easier than doing it in Tcl. I do not
> believe that you can do everything in Python that you can do
> in Tcl (at least as regards extending the language itself).

I meant that Python addresses all the major areas
of application that Tcl's designer claims Tcl was
designed for.

I don't regard extending Tcl as an "application"
of Tcl -- it's just a means to an end. Python
achieves many of the same ends as Tcl using 
different means.

Greg
From: ·······@cas.org
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <7liblm$k0f$1@srv38.cas.org>
According to Paul Duffin  <·······@mailserver.hursley.ibm.com>:
:in Python and YOU find it easier than doing it in Tcl. I do not
:believe that you can do everything in Python that you can do
:in Tcl (at least as regards extending the language itself).

Paul, what about the opposite - can one do everything in Tcl that can
be done in Python - or are there any fundamental features of Python
which could not be added into Tcl via an extension?

-- 
<URL: ··············@cas.org> Quote: Saving the world before bedtime.
<*> O- <URL: http://www.purl.org/NET/lvirden/>
Unless explicitly stated to the contrary, nothing in this posting
should be construed as representing my employer's opinions.
From: Soren Dayton
Subject: Re: Why is tcl broken?
Date: 
Message-ID: <xcdg13qo3io.fsf@sloohah.cs.uchicago.edu>
Greg Ewing <··········@compaq.com> writes:

> Fernando Mato Mira wrote:
> > 
> > I wanted to find out whether the disgust for tcl I've seen many people
> > express had some basis on some generally accepted principles.
> 
> I think I can give some reasonably objective reasons why
> I find that I enjoy using Python in a way that I don't
> enjoy using Tcl. (I have used both, and in one case have
> written versions of the same program in both languages.)
> 
> 1. Syntax
> 
> Tcl's syntax is based on the principle that an unadorned
> sequence of alphanumeric characters represents a literal.
> Anything else (such as referring to a variable) requires
> extra syntax.
> 
> Ousterhout's justification is that this is optimal
> for entering commands interactively, which is probably true.
> However, this seems to me the wrong thing to optimise for,
> given that tcl is meant to be a *programming* language.
> The case being optimised for -- typing in a command which
> is to be used once and thrown away -- simply doesn't occur.
> Programs, even tiny ones, get used more than once!

but it occurs to me that this is a GOOD argument for the best uses of
tcl, that is, as an interactive-style shell or scriptable
configuration-file language.

tcl _IS_ too annoying to do real programming in, but it is not clear
that it is not _by far_ the best tool for this kind of niche use.  (Yes, 
I've heard about guile, but frankly, tcl looks like my ipfilter
configuration files and we are _NOT_ after expressibility here but a
sort of clarity)

Soren