From: Ray Blaak
Subject: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <m3zof7gvbz.fsf_-_@blight.transcend.org>
Marco Antoniotti <·······@cs.nyu.edu> writes:
> Christian Lynbech <···@tbit.dk> writes:
> >[scheme has...]
> Then why not use CL? (Instead of reimplementing the wheel over and over and
> over and over , until you get the last wheel on the block avec forced
> indentation :) )

Because then they wouldn't be programming in Scheme, of course :-).

You could just as easily say "Java has [feature], but so does CL, so why not
use CL?" But the urge to convert programmers of other languages seems to be
less than that for Scheme.

<SoapBox>

Scheme is just a programming language. People use it for the same reasons they
use any other language: they like it, it meets their needs, whatever.

That there are Scheme freaks who would like to take over the world does not
mean all those who use Scheme are insufferable fools. 

That there are problems with Scheme does not mean all those who use it are
clueless idiots. 

All languages have their rough points, and just how rough is a source of
endless debates, often illuminating, often pointless.

But the level of hostility toward Scheme in this newsgroup, and the certainty
with which lispers here know the One True Way, are quite astonishing. 

I lurk in a fair number of comp.lang.* newsgroups, and this one seems to me to
have the most intolerance/smugness. Even in comp.lang.scheme, when comparisons
to CL come up, the discussions point out the differences, opinions and
preferences are expressed, but then everyone just moves on to discuss some more
Scheme. No big deal.

I used to think comp.lang.ada was the worst, with everyone there just being
amazed at all the silly people refusing to understand the benefits of salvation
by using Ada (hallelujah). These days, however, they seem to have understood
that other languages have their place, and it is in fact useful to work with
Ada and the other evil languages.

Sometimes I feel like I am at a fundamentalist meeting, where the slightest
voice of dissent will brand you as a heretic.

Lighten up people. Use Common Lisp because you can use no other, but at least
be aware that the other folks' ways can be learned from (even their mistakes).

</SoapBox>

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.

From: Xah Lee
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <B6C1E48F.60DA%xah@xahlee.org>
Lighten up, folks; there's no need to be intolerant of a sibling's tone. I have
now composed a poem dedicated to _all_ common lispers.

 My name is Scheme
 pure and beautiful,
 but in the real world,
 i rape and beat lisps,
 all of them to death.

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html


> From: Ray Blaak <·····@infomatch.com>
> Newsgroups: comp.lang.lisp
> Date: 27 Feb 2001 22:44:48 -0800
> Subject: Language intolerance (was Re: Is Scheme a `Lisp'?)
From: Janis Dzerins
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <8766hv2jgj.fsf@asaka.latnet.lv>
Xah Lee <···@xahlee.org> writes:

> Lighten up folks, there's no need to be intolerant of sibling's tone. I have
> now composed a poem dedicated to _all_ common lispers.
> 
>  My name is Scheme
>  pure and beautiful,
>  but in the real world,
>  i rape and beat lisps,
>  all of them to death.
   And my name is the Reaper,
   Don't call me a lisp.

-- 
Janis Dzerins

  If million people say a stupid thing it's still a stupid thing.
From: Xah Lee
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <B6C292F7.6113%xah@xahlee.org>
In my last message I dedicated a poem to all common lispers:

 My name is Scheme,
 pure and beautiful,
 but in the real world,
 i rape and beat lisps,
 all of them to death.

Now, i wish to complement it with a dedication to the Scheme dialect
programmers:

 The fire in my eyes,
 burning fierce and bright,
 reflecting flare before my eyes,
 Common Lispers' asses alight.

In conclusion to the topic of this thread, i dedicate this one to all
siblings in-fight:

 My brethren lessons a learnt,
 never utter truths a bent,
 lest Xah elder wrath a birth,
 thy buttocks =puff=, ignite.


(translation:
Oh, my dear Common Lisp-loving programmers, you all have learned a lesson
this week: never say things you know are not exactly true. For example, do
not say to your brother: "My mom is not your mom, my dad is not your dad."
Because if you do, the loving person Xah Lee will know and will be angry,
and make your ugly behavior clearly understood by everybody.
)

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html
From: Eugene Zaikonnikov
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <6y7l2a772w.fsf@viking.cit>
* "Xah" == Xah Lee <···@xahlee.org> writes:

Xah>  but in the real world, i rape and beat lisps, all of them to
Xah>  death.

Many before you tried; most of them rest by gravestones now,
outlived by Lisp. Lisp is the eternal magic of greater orders, and
pitiful mortals like you can do it no harm.

Xah>  The fire in my eyes,
Xah>  burning fierce and bright,
Xah>  reflecting flare before my eyes,
Xah>  Common Lispers' asses alight.

Eat Flaming Death, The lesser being, who Never Heard A Word Of True
Poetry.

DIVERSE IMAGE
(The Song of Expressions)

Eval has a Lispy Heart,
And Macros make a Lispy Face,
Class the Lispy Form Divine,
And Syntax is the Lispy Dress.

The Scheme Dress is Forged Syntax,
The Scheme Form is Tail Recurse,
The Scheme Face is Funcall seal'd,
The Scheme Heart is hungry Cons.


-- 
  Eugene
From: Ray Blaak
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <uitluvijp.fsf@infomatch.com>
Erik Naggum <····@naggum.net> writes:
> [a fair post, except for]
>   Your very bad code examples shows that you are willing to post completely
>   unfounded beliefs, and do not want to make the effort to check your
>   conclusions or indeed premises for correctness.

Not at all. My very bad code example was an honest attempt to learn something
about CL. And I did. There were no beliefs or premises being held. I *knew* it
was incorrect, but assumed people would get the gist of what I was getting at.

If I had actually had a CL on my machine, I could have answered the question
myself. Next time I will likely take the trouble to install CL instead.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Xah Lee
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <B6C2455A.60F0%xah@xahlee.org>
Dear Erik Naggum,

you wrote:
> I think you're missing an important point.  Whenever anyone tries to sell
> Common Lisp, argue for Common Lisp, etc, the bad experiences people have
> with Scheme become a problem for us before we can move on to do what we
> want.  Whenever some Scheme freak says "Scheme is a Lisp", he holds
> "Lisp" back in the minds of whoever has been exposed to Scheme and thinks
> he knows something about "Lisp", 1960-style.  Bad teachers or pedagogy,
> which only shows people the unique Scheme features, some of which are
> pretty bizarre, making it appear _Lisp_ doesn't have iteration, arrays,
> strings, etc, exacerbate the problem of selling Common Lisp.  In an
> important way, Scheme is _in_the_way_ when a Common Lisp proponent tries
> to talk about his favorite choice.  Because of the need to move Scheme
> out of the way all the time, hostility grows.  This is a situation the
> Scheme freaks have created all on their own, by insisting that Scheme is
> a Lisp, way beyond any useful comparisons to any other extant Lisps.


That's not Scheme's problem; life's a survivalism. Fight fight fight, fight
fight fight, fight the myths for your life.

(and just don't forget what imperative languagessss do to you and your food
source. Pray that Scheme is everywhere before Common Lisp becomes extinct.
Despite Common Lisper's problems, i betcha ass Scheme is still A Lisp!)

 Xah
 ···@xahlee.org
 http://xahlee.org/PageTwo_dir/more.html


> From: Erik Naggum <····@naggum.net>
> Organization: Naggum Software, Oslo, Norway
> Newsgroups: comp.lang.lisp
> Date: 28 Feb 2001 11:02:37 +0000
> Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
From: Marco Antoniotti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <y6cwvaav7b5.fsf@octagon.mrl.nyu.edu>
Ray Blaak <·····@infomatch.com> writes:

> Marco Antoniotti <·······@cs.nyu.edu> writes:
> > Christian Lynbech <···@tbit.dk> writes:
> > >[scheme has...]
> > Then why not use CL? (Instead of reimplementing the wheel over and over and
> > over and over , until you get the last wheel on the block avec forced
> > indentation :) )
> 
> Because then they wouldn't be programming in Scheme, of course :-).
> 
> You could just as easily say "Java has [feature], but so does CL, so why not
> use CL?" But the urge to convert programmers of other languages seems to be
> less than that for Scheme.

You are forgetting the main points of the issue.

CL has N (for a very large positive N) features that Scheme simply
*does not have*.  The inverse is essentially limited to call/cc.
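
To illustrate (a quick sketch of mine only, and only of the escape-only
case; FIRST-EVEN is a made-up example):

  ;; The "escape upward" uses of call/cc already have a CL analogue in
  ;; BLOCK/RETURN-FROM:
  (defun first-even (numbers)
    (block found
      (dolist (x numbers)
        (when (evenp x)
          (return-from found x)))))   ; escape with the answer

  ;; (first-even '(1 3 4 5))  => 4
  ;; What CL does not give you portably is a *re-entrant* continuation,
  ;; one you can invoke again after FOUND has already returned.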

Yet the number of hours sunk into making yet another (L)GPL or Open
Source or whatever license Scheme implementation, or Scheme library
covering this or that piece of CL is *staggering*.

This is what irks Common Lispers like me.

Apart from that what you say is true. :)

Finally, allow me to add: I am light-hearted when making these comments
on Scheme.  It's like a sport for me :) It's fun and easy.

> <SoapBox>
> 
> Scheme is just a programming language. People use it for the same
> reasons they use any other language: they like it, it meets their
> needs, whatever.
> 
	...
> 
> Lighten up people. Use Common Lisp because you can use no other, but at least
> be aware that the other folks' ways can be learned from (even their
> mistakes).
> 
> </SoapBox>

The last is particularly true.  We learned that a big, fat and
incomplete standard is better than a thin, trimmed and incomplete
standard.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Russell Wallace
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3aa53059.183877603@news.iol.ie>
On 28 Feb 2001 16:14:38 -0500, Marco Antoniotti <·······@cs.nyu.edu>
wrote:

>CL has N (for a very large positive N) features that Scheme simply
>*does not have*.  The inverse is essentially limited to call/cc.
>
>Yet the number of hours sunk into making yet another (L)GPL or Open
>Source or whatever license Scheme implementation, or Scheme library
>covering this or that piece of CL is *staggering*.

There's a causal relationship between these facts.

Writing your own little language implementation is a valuable learning
experience. But which would you rather try to implement as a hobby
project, Scheme or Common Lisp? :)

-- 
"To summarize the summary of the summary: people are a problem."
···············@esatclear.ie
http://www.esatclear.ie/~rwallace
From: Marco Antoniotti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <y6cvgpmogat.fsf@octagon.mrl.nyu.edu>
········@esatclear.ie (Russell Wallace) writes:

> On 28 Feb 2001 16:14:38 -0500, Marco Antoniotti <·······@cs.nyu.edu>
> wrote:
> 
> >CL has N (for a very large positive N) features that Scheme simply
> >*does not have*.  The inverse is essentially limited to call/cc.
> >
> >Yet the number of hours sunk into making yet another (L)GPL or Open
> >Source or whatever license Scheme implementation, or Scheme library
> >covering this or that piece of CL is *staggering*.
> 
> There's a causal relationship between these facts.
> 
> Writing your own little language implementation is a valuable learning
> experience. But which would you rather try to implement as a hobby
> project, Scheme or Common Lisp? :)

You can always use Common Lisp to implement a Scheme.  That would be
an educational experience. :)
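
Just to give the flavour (a toy sketch of mine, nowhere near a real
Scheme: no call/cc, no define, no tail calls, environments as plain
alists; SEVAL is a made-up name):

  ;; Toy Scheme-flavoured evaluator in CL: one namespace, alist environments.
  (defun seval (form env)
    (cond ((symbolp form) (cdr (assoc form env)))
          ((atom form) form)
          (t (case (first form)
               (quote  (second form))
               (lambda (destructuring-bind (params body) (rest form)
                         (lambda (&rest args)
                           (seval body (pairlis params args env)))))
               (if     (if (seval (second form) env)
                           (seval (third form) env)
                           (seval (fourth form) env)))
               (t      (apply (seval (first form) env)
                              (mapcar (lambda (a) (seval a env))
                                      (rest form))))))))

  ;; (seval '((lambda (x) (if x 1 2)) 'yes) '())  => 1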

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Rob Warnock
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <984dpd$1aogg$1@fido.engr.sgi.com>
Marco Antoniotti  <·······@cs.nyu.edu> wrote:
+---------------
| ········@esatclear.ie (Russell Wallace) writes:
| > Writing your own little language implementation is a valuable learning
| > experience. But which would you rather try to implement as a hobby
| > project, Scheme or Common Lisp? :)
| 
| You can always use Common Lisp to implement a Scheme.  That would be
| an educating experience. :)
+---------------

Part V "The Rest of Lisp", Chapter 22 "Scheme: An Uncommon Lisp",
in "Paradigms of Artificial Intelligence Programming: Case Studies
in Common Lisp", by Peter Norvig <URL:http://www.norvig.com/paip.html>.

[Also Chapter 23 "Compiling Lisp", which develops a compiler for the
Scheme in Chapter 22.]


-Rob

-----
Rob Warnock, 31-2-510		····@sgi.com
SGI Network Engineering		<URL:http://reality.sgi.com/rpw3/>
1600 Amphitheatre Pkwy.		Phone: 650-933-1673
Mountain View, CA  94043	PP-ASEL-IA
From: Mike McDonald
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <xmcp6.304$a3.11912@typhoon.aracnet.com>
In article <··················@news.iol.ie>,
	········@esatclear.ie (Russell Wallace) writes:

> Writing your own little language implementation is a valuable learning
> experience. But which would you rather try to implement as a hobby
> project, Scheme or Common Lisp? :)

  As a hobby project? Either ZetaLisp or LispM lisp! (Anyone know of a CL
implementation of flavors?)

  Mike McDonald
  ·······@mikemac.com
From: Kent M Pitman
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <sfwae6yv96u.fsf@world.std.com>
·······@mikemac.com (Mike McDonald) writes:

> (Anyone know of a CL implementation of flavors?)

I vaguely think I might have heard that Franz has one?  (They could comment
better than I could.)

I wrote one for the probably-now-defunct CLOE project at old Symbolics.  
(I don't know what happened to the CLOE assets after the old Symbolics
liquidation, but my default assumption is it went to the new Symbolics.)
It wasn't heavy on error checking but implemented most of the features 
needed for product delivery.  It was kinda fun to write, as I recall.
From: Mike McDonald
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <PRfp6.316$a3.12255@typhoon.aracnet.com>
In article <···············@world.std.com>,
	Kent M Pitman <······@world.std.com> writes:
> ·······@mikemac.com (Mike McDonald) writes:
> 
>> (Anyone know of a CL implementation of flavors?)
> 
> I vaguely think I might have heard that Franz has one?  (They could comment
> better than I could.)

  I should have been more specific: does anyone know of the source to a CL
implementation of flavors? I know of the ACL version and I've played with it
some. (It's a subset of the LispM's; no special instance variables, if I
remember right.) I'm currently playing with CMUCL, so I'd kind of like to use
that. At one time there was one in the CMU Lisp archives, but I haven't been
able to get in lately. Also, the old Franz Lisp had a toy version.

> I wrote one for the probably-now-defunct CLOE project at old Symbolics.  
> (I don't know what happened to the CLOE assets after the old Symbolics
> liquidation, but my default assumption is it went to the new Symbolics.)
> It wasn't heavy on error checking but implemented most of the features 
> needed for product delivery.  It was kinda fun to write, as I recall.

  I got to evaluate CLOE once! Does that count? :-) We wanted to deliver on
Suns instead of PCs so it wasn't for us. We had a UX400 for a while though.

  Mike McDonald
  ·······@mikemac.com
From: Paolo Amoroso
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <zDymOm+3j5HHQ+UMK22OUwa0iD9r@4ax.com>
On Tue, 6 Mar 2001 22:12:09 GMT, Kent M Pitman <······@world.std.com>
wrote:

> I wrote one for the probably-now-defunct CLOE project at old Symbolics.  

What is CLOE?


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Kent M Pitman
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <sfwd7btpgat.fsf@world.std.com>
Paolo Amoroso <·······@mclink.it> writes:

> On Tue, 6 Mar 2001 22:12:09 GMT, Kent M Pitman <······@world.std.com>
> wrote:
> 
> > I wrote one for the probably-now-defunct CLOE project at old Symbolics.  
> 
> What is CLOE?

It stood for Common Lisp Operating Environment.  It was a Symbolics->386
native delivery solution for applications using primarily portable common
lisp.  (It was supposed to have window system support, but that never
worked very well.)  The idea was to develop on the Lisp Machine with a
special environment that was very conservative and signaled lots of errors
to keep you in line so that you developed code that would deliver well on a
386.

The lingering vestige of this today in the Symbolics system is the
CLTL syntax (and accompanying package, which masquerades as the LISP
package when in the CLTL syntax), which is similar to the SCL package,
but is conservative rather than liberal in its interpretation of gray
areas.

Conservative readings of a spec enhance portability outward; liberal readings
enhance inward portability.  The native LispM environment is liberal in
its readings, IMO.
From: Andras Simon
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <vcdwva2fqvq.fsf@russell.math.bme.hu>
·······@mikemac.com (Mike McDonald) writes:


>   As a hobby project? Either ZetaLisp or LispM lisp! (Anyone know of a CL
> implementation of flavors?)

International Allegro CL Trial Edition
6.0 [Linux (x86)] (Dec 7, 2000 16:15)
Copyright (C) 1985-2000, Franz Inc., Berkeley, CA, USA.  All Rights
Reserved.

This copy of Allegro CL is licensed to:
   Andras Simon, Technical University, Budapest


; Loading home .clinit.cl file.
;; Optimization settings: safety 1, space 1, speed 1, debug 2.
;; For a complete description of all compiler switches given the current
;; optimization settings evaluate (EXPLAIN-COMPILER-SETTINGS).
CL-USER(1): (require :flavors)
; Fast loading from bundle code/flavors.fasl.
;   Fast loading from bundle code/vanilla.fasl.
T
CL-USER(2): 

Andras
From: romeo bernardi
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <Ycfp6.9968$GR5.193616@news2.tin.it>
"Mike McDonald" <·······@mikemac.com> ha scritto nel messaggio
·······················@typhoon.aracnet.com...
> In article <··················@news.iol.ie>,
> ········@esatclear.ie (Russell Wallace) writes:
>
> > Writing your own little language implementation is a valuable learning
> > experience. But which would you rather try to implement as a hobby
> > project, Scheme or Common Lisp? :)
>
>   As a hobby project? Either ZetaLisp or LispM lisp! (Anyone know of a CL
> implementation of flavors?)

It is in the standard distribution of ACL.

P.
From: Will Hartung
Subject: Fundamental CL Core (was Re: Language intolerance)
Date: 
Message-ID: <9868d002su9@news2.newsguy.com>
"Erik Naggum" <····@naggum.net> wrote in message
·····················@naggum.net...
>   Common Lisp.  Implementing a Scheme is no challenge.  Figuring out how to
>   implement a fundamental core of Common Lisp is quite interesting work,
>   and starting to do it, so you can test your conclusions.

This sort of rings back to the "minimal Lisp" thread of a little while ago.

What is considered the fundamental core of Common Lisp?

I think it's safe to say that the Package System is not a fundamental piece,
CLOS is questionably fundamental, and exceptions, restarts, etc are Core.

CLOS is the best example of something that isn't fundamental, but it doesn't
perform or behave well if it's not really tightly bound into the system. It
works much better when it is fundamental and designed into the
implementation in the first place. No doubt folks like Franz and Xanalys
have specific optimizations for CLOS that won't happen if you just load the
PCL package.

So, what's core?

Regards,

Will Hartung
(·····@msoft.com)
From: Kent M Pitman
Subject: Re: Fundamental CL Core (was Re: Language intolerance)
Date: 
Message-ID: <sfwwva1w4t0.fsf@world.std.com>
"Will Hartung" <·····@msoft.com> writes:

> "Erik Naggum" <····@naggum.net> wrote in message
> ·····················@naggum.net...
> > Common Lisp.  Implementing a Scheme is no challenge.  Figuring out
> > how to implement a fundamental core of Common Lisp is quite
> > interesting work, and starting to do it, so you can test your
> > conclusions.
> 
> This sort of rings back to the "minimal Lisp" thread of a little
> while ago.
> 
> What is considered the fundamental core of Common Lisp?
> 
> I think it's safe to say that the Package System is not a
> fundamental piece,

Why? (I don't have a position on this, I'm just curious why you're so 
definite about it.)

> CLOS is questionably fundamental, and exceptions, restarts, etc are
> Core.
 
Again why?

It's almost as if you're building your idea of core as you go...

> CLOS is the best example of something that isn't fundamental, but it
> doesn't perform or behave well if it's not really tightly bound into
> the system. It works much better when it is fundamental and designed
> into the implementation in the first place. No doubt folks like
> Franz and Xanalys have specific optimizations for CLOS that won't
> happen if you just load the PCL package.
> 
> So, what's core?

This discussion used to come up a lot in my discussion with Eulisp
designers.  I identified two competing meanings of "core" that seemed
incompatible yet held simultaneously by people working together on the
project.  I want to at least observe the potential for people to talk
at crossed purposes:

I think "core" means "encapsulates something that I could not write
myself without going out of the language".  So to me, the function
OPEN is core because without it I'd better have an FFI as core plus
also a manual for every operating system I'm allowed to run in and a
way to detect which operating system is out there so I can call the
right OS-level primitive.  To me, as a user of CL, a language that
doesn't come with OPEN doesn't talk to the file system and is
fundamentally limited in what it can do.  No matter how many macro
packages or higher order functions I write, I'm still stuck not
talking to the operating system.  Ditto for UNWIND-PROTECT.  And
WITHOUT-PREEMPTION or WITHOUT-INTERRUPTS.  And DRAW-LINE.  And so on.
CL, if anything, has too small a core.  The Lisp Machine, I believe,
had my notion of "core".  It knew that anytime there was a primitive
that it didn't give the user access to, it was denying the user
control over the ability to do things in that programming domain.

There is another meaning of "core" that is incompatible with this.  It
assumes that there is a generous god out there that makes "libraries"
and is continuously attaching them to your language using some
extralinguistic glue that is not part of your language, but that
causes you to miraculously be able to open files on one day even
though you couldn't the day before.  You just call some new function
OPEN (never mind how it got there) and it would do this service for
you.  In this universe, the notion of "core" is "anything that must be
done with a special form".  As long as something doesn't require new
language glue to express, it is not "core".
From: Will Hartung
Subject: Re: Fundamental CL Core (was Re: Language intolerance)
Date: 
Message-ID: <986lga0hgt@news2.newsguy.com>
"Kent M Pitman" <······@world.std.com> wrote in message
····················@world.std.com...
> "Will Hartung" <·····@msoft.com> writes:
>
> > "Erik Naggum" <····@naggum.net> wrote in message
> > ·····················@naggum.net...
> > > Common Lisp.  Implementing a Scheme is no challenge.  Figuring out
> > > how to implement a fundamental core of Common Lisp is quite
> > > interesting work, and starting to do it, so you can test your
> > > conclusions.
> >
> > This sort of rings back to the "minimal Lisp" thread of a little
> > while ago.
> >
> > What is considered the fundamental core of Common Lisp?
> >
> > I think it's safe to say that the Package System is not a
> > fundamental piece,
>
> Why? (I don't have a position on this, I'm just curious why you're so
> definite about it.)

I think, as you mention below, it's what needs to be done "outside" the
language.

I think that the Package system could be built and defined completely within
CL itself, without having to touch the OS interface, or the compiler, as the
package system appears to "only" affect the Lisp Reader.
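
Roughly what I have in mind (a toy sketch only; TOY-PACKAGE and TOY-INTERN
are made-up names, and it ignores use lists, shadowing, exporting, and all
the hairy parts):

  ;; A "package" is, at bottom, a name -> symbol table the reader consults.
  (defstruct toy-package
    (symbols (make-hash-table :test #'equal)))

  (defun toy-intern (name package)
    (or (gethash name (toy-package-symbols package))
        (setf (gethash name (toy-package-symbols package))
              (make-symbol name))))

  ;; (defvar *pkg* (make-toy-package))
  ;; (eq (toy-intern "FOO" *pkg*) (toy-intern "FOO" *pkg*))  => T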

> > CLOS is questionably fundamental, and exceptions, restarts, etc are
> > Core.
>
> Again why?

As PCL demonstrates, CLOS can be built up within the language itself and
simply loaded, but then it's not quite right (as demonstrated by CMUCL's
implementation). If CLOS is there from the start, and the compiler is made
aware of it, then it Works Better. So, I think it is arguable that CLOS as
we know it today cannot be done simply from within the language (but I could
be mistaken).

> It's almost as if you're building your idea of core as you go...

I was just trying to come up with examples of current CL constructs that I
felt were on one side of the "core" line, on the line, and on the other
side of the line.

> This discussion used to come up a lot in my discussion with Eulisp
> designers.  I identified two competing meanings of "core" that seemed
> incompatible yet held simultaneously by people working together on the
> project.  I want to at least observe the potential for people to talk
> at crossed purposes:
>
> I think "core" means "encapsulates something that I could not write
> myself without going out of the language".

So this brings me, with clarification (thank you, Kent), to my question.
Erik mentioned the "Fundamental Core".

HASHTABLEs don't make CL what it is. They're nice, and they're in the
standard so they're portable, but if they didn't exist, they could easily be
crafted. They're not part of the core that I think Erik was talking about.
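
For example, something as crude as this would do in a pinch (made-up names,
never resizes, never replaces old entries):

  ;; Hash tables "crafted" from a vector of alists -- portable, if slow.
  (defun make-crude-table (&optional (size 101))
    (make-array size :initial-element nil))

  (defun crude-get (key table)
    (cdr (assoc key (aref table (mod (sxhash key) (length table)))
                :test #'equal)))

  (defun crude-put (key value table)
    (push (cons key value)
          (aref table (mod (sxhash key) (length table))))
    value)

  ;; (defvar *tbl* (make-crude-table))
  ;; (crude-put "answer" 42 *tbl*)
  ;; (crude-get "answer" *tbl*)  => 42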

Another example: I think it's pretty clear that you cannot make a CL out of
a Scheme without redoing the implementation, so Scheme is not a "core" of
CL. Perhaps it's close; or, more likely, perhaps it's way off but LOOKS close
to the casual observer.

So, I was curious what people thought the CL core was.

I guess in simple terms regarding the LispM, the answer is "Whatever is
written in micro-code" or protoLisp, or whatever they used to bootstrap
those things.

Regards,

Will Hartung
(·····@msoft.com)
From: Lars Lundbäck
Subject: Re: Fundamental CL Core (was Re: Language intolerance)
Date: 
Message-ID: <3AA78DB4.12818029@era.ericsson.se>
Will Hartung wrote:
>
> > Kent Pitman wrote:
> > 
> > I think "core" means "encapsulates something that I could not write
> > myself without going out of the language".
> 
> So, this brings me to, with clarification (thank you Kent), my question.
> Erik mentioned the "Fundamental Core".
> 

Are you perhaps thinking about some Lisp OS, that "can of worms"? I am
very sure that Erik had no such thing in mind since he has not been
gulled into discussing Common Lisp from that viewpoint, yet the first
paragraph in his post illustrates the Lisp OS complex of problems so
well.

> I guess in simple terms regarding the LispM, the answer is "Whatever is
> written in micro-code" or protoLisp, or whatever they used to bootstrap
> those things.
> 

Are we discussing the language core or a Common Lisp System core?

To see the boundaries of any core, I think one would have to state what
the functionality of that core is to be. The CL specification (meaning
the HyperSpec, I haven't read the ANSI spec) is non-layered. Is it
possible and reasonable to layer the language, so that any function in a
layer only depends on definitions(?) in some lower layer? The bottom
layer(s) may serve as a core _for the language_ but not for the
_system_. That kind of core would have CL code as well as protoLisp, and
some of it would be very hidden from the user.

Anyway, I'm sure you had something in the back of your mind that made
you think in terms of a CL core; perhaps you can tell us?

Regards,
Lars
From: David Thornley
Subject: Re: Fundamental CL Core (was Re: Language intolerance)
Date: 
Message-ID: <Fw9q6.259$Tg.40791@ruti.visi.com>
In article <··········@news2.newsguy.com>,
Will Hartung <·····@msoft.com> wrote:
>
>So, I was curious what people thought the CL core was.
>
>I guess in simple terms regarding the LispM, the answer is "Whatever is
>written in micro-code" or protoLisp, or whatever they used to bootstrap
>those things.
>
Which is not necessarily the core for any other system, and so I don't
see the relevance to the CL core.

One problem with trying to find the core as what's written in CL and
what's not is that this doesn't map to the user level.  On anything
other than a LispM, say a Mac or Windows or Unix box, there will be
a foreign function interface that may well be accessed by very system-
specific Lisp code.

Consider, for example, Kent's OPEN example.  If I can't open files in
a standard way, then I can't write many useful standard programs.
OPEN, therefore, has to be part of the core*.  Yet it is likely to
be, essentially, a call of some Lisp functions that are system-specific
and intimately familiar with the internals of the FFI.

You could try to define a core as functionality that has to be standard
and cannot be built on other core technology.  For example, WITH-OPEN-FILE
could be defined in terms of file manipulation and UNWIND-PROTECT.
I don't know what removing WITH-OPEN-FILE from the core buys anybody,
though, since it should be fairly easy to implement.
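
Something along these lines, say (only a sketch; MY-WITH-OPEN-FILE is a
made-up name, and the real macro also has to handle :if-does-not-exist nil
and aborting the file on a non-local exit):

  (defmacro my-with-open-file ((var filespec &rest open-args) &body body)
    `(let ((,var (open ,filespec ,@open-args)))
       (unwind-protect
            (progn ,@body)
         (when ,var (close ,var)))))

  ;; (my-with-open-file (s "test.out" :direction :output :if-exists :supersede)
  ;;   (write-line "hello" s))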

If the idea is to define a Common Lisp core as base functionality that
needs to be implemented for an installation while providing CL code
for everything else, I don't see what the use is, since the hard part
is in that base functionality.

Nor does it make sense to me to try to draw the line at what can be
implemented in the language without loss of efficiency, since that will
vary from system to system.  CLOS generally benefits from being built-in,
but ISTM that any other part of the language might also, on a system-by-
system basis.

The best argument I see for defining a "core" to build "libraries" around
is marketing:

*NEW* Lisp 3.0!
Now a dynamic object-oriented language!
Fully compiled for efficiency!
Combining desirable features from C++, Java, Smalltalk, and Scheme into
   a coherent manageable whole!
Here's the language core to look at; the libraries contain exception
   handling, advanced numeric computation, and much much more!

I'm not saying this is a bad idea....

*Embedded Common Lisp, anybody?  So far it seems to me that the embedded
community (odd mental image, that) is probably reluctant to go for
garbage collection and error handling.

--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: George Neuner
Subject: Re: Fundamental CL Core (was Re: Language intolerance)
Date: 
Message-ID: <3aaa89e6.273995194@helice>
On Fri, 09 Mar 2001 18:44:53 GMT, ········@visi.com (David Thornley)
wrote:

>
>*Embedded Common Lisp, anybody?  So far it seems to me that the embedded
>community (odd mental image, that) is probably reluctant to go for
>garbage collection and error handling.
>

The embedded programming community is highly motivated by one or both
of predictable execution time and small runtime footprint.



My current work, industrial vision, is mostly about predictable
execution.  I don't [greatly] care how large the runtime footprint is
so long as the application's time constraints are met.  In a typical
system I have execution windows ranging from a few to a few hundred
milliseconds for an input-varying chain of functions.

However, my mileage is not typical ... the time constraints that I
routinely deal with are *extremely* large compared to most embedded
systems.  Frequently time constraints are measured in microseconds or
at worst a few milliseconds.

Also, given the nature of the business, I can specify a 500Mhz CPU
with 64MB of RAM if I must.  My boss will grumble ... loudly ... about
needing a $1000 [industrial] computer, but may opt for it versus more
development time to squeeze a large application onto a less capable
platform.

However, embedded projects are frequently very small and highly
sensitive to deployment cost.  Many programs use only the internal
scratch RAM of a microcontroller or DSP, ranging from 256 bytes at the
low end to ~32 Kbytes at the very high [read expensive] end.
Practically anything that avoids adding chips or using more expensive
ones will be tolerated - even additional development time to code and
debug in C or [Ugh!] assembler.  Been there, done that.

I have been following the developments in real time GC.  Currently, I
believe that it does work in soft time systems and in hard time
systems with multi-millisecond constraints, but is useless for systems
with more stringent time constraints.  I also believe that the
resource costs, both in CPU cycles and extra RAM, are simply too high
for many uses.



The other problems I see are with familiarity and business culture.
Project managers who haven't already dealt with intelligent devices or
don't have problems that involve high flexibility or complicated
mapping or searching just aren't likely to be familiar with Lisp as a
development language.  You can sometimes educate such people.

Culture is another issue.  For example: my company has developed
vision QA systems that are USFDA approved for pharmaceutical
manufacturing.  My experience is that project managers who answer to
FDA oversight generally consider "lisp" a speech impediment.  Coding
must be done in mainstream, "proven" languages ... i.e., *not* Lisp, and in
one that other vendors have previously recommended.

And please, everybody! ... no flames or war stories about how Lisp
*has* been proven.  You'll get no argument from me!



George Neuner
Automated Visual Inspection Systems, Inc.
===================================================
The opinions expressed herein are my own and do not
reflect the opinions or policies of my employer.
===================================================
From: Christian Lynbech
Subject: Re: Fundamental CL Core (was Re: Language intolerance)
Date: 
Message-ID: <877l1xppfl.fsf@ted.ericsson.dk>
>>>>> "David" == David Thornley <········@visi.com> writes:

David> *Embedded Common Lisp, anybody?  So far it seems to me that the embedded
David> community (odd mental image, that) is probably reluctant to go for
David> garbage collection and error handling.

 =8-0  How can you say that? 

I am part of a project to build a router for the third generation
mobile telephone network and we use both exceptions and garbage
collection. We have written a network management framework together
with agent applications in Scheme and we are currently considering
various ways and possibilities of switching to Common Lisp.

Ok, ok, to be honest I should also mention that we are probably not
very representative of the embedded systems community, that colleagues
and management elsewhere in Ericsson are quite sceptical about the
wisdom of our approach, and that the router is nowhere near being as
"hardcore embedded" as for instance the mobile telephones are.


------------------------+-----------------------------------------------------
Christian Lynbech       | Ericsson Telebit, Skanderborgvej 232, DK-8260 Viby J
Phone: +45 8938 5244    | email: ·················@ted.ericsson.dk
Fax:   +45 8938 5101    | web:   www.ericsson.com
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
                                        - ·······@hal.com (Michael A. Petonic)
From: Russell Wallace
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3aa662d8.62561860@news.iol.ie>
On 07 Mar 2001 02:27:54 +0000, Erik Naggum <····@naggum.net> wrote:

>* Russell Wallace
>> Writing your own little language implementation is a valuable learning
>> experience. But which would you rather try to implement as a hobby
>> project, Scheme or Common Lisp? :)
>
>  Common Lisp.  Implementing a Scheme is no challenge.  Figuring out how to
>  implement a fundamental core of Common Lisp is quite interesting work,
>  and starting to do it, so you can test your conclusions.  Figuring out
>  the impact of implementing another feature, and like projects, will also
>  yield very valuable information.  Figuring out how to layer features in
>  Common Lisp so you can implement them in stages is also very useful --
>  many domain-specific languages grow that way, and growing implementation
>  from the bottom up and from the top down at the same time will yield a
>  lot of exciting insight.  Then there's realizing the support environment
>  around the language.  How much do you really need to get anywhere is not
>  something people know a priori.

Valid points. However, I'm still of the opinion that for a typical
postgrad student, say, a Scheme is about the right level of challenge;
it's simple enough that you can wrap your mind around the whole
language without needing many years of experience in it, and a full
implementation isn't too much work to be in the scope of what one
person can do.

>  If you don't make the mistake of starting out with a single-namespace
>  Lisp with no real symbols (i.e, a Scheme), you can grow with a lot less
>  pain and suffering than if you start out with a serious design flaw.

Hmm, unless I'm missing something, the main advantage of a separate
function namespace seems to be that it allows generating more
efficient code, without needing declarations or general type
inferencing? For commercial applications this is important, but for
academic purposes it would seem to be much less so.

-- 
"To summarize the summary of the summary: people are a problem."
···············@esatclear.ie
http://www.esatclear.ie/~rwallace
From: Marco Antoniotti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <y6cwva149vr.fsf@octagon.mrl.nyu.edu>
········@esatclear.ie (Russell Wallace) writes:

> On 07 Mar 2001 02:27:54 +0000, Erik Naggum <····@naggum.net> wrote:
> 
	... stuff Erik wrote deleted ...
> 
> Valid points. However, I'm still of the opinion that for a typical
> postgrad student, say, a Scheme is about the right level of challenge;
> it's simple enough that you can wrap your mind around the whole
> language without needing many years of experience in it, and a full
> implementation isn't too much work to be in the scope of what one
> person can do.

But why shouldn't s/he use Common Lisp to do the implementation?

I strongly advocate this approach.  At least you can get across the
point that Scheme is a strict subset of Common Lisp. (And no! You are not
allowed to mention call/cc in this context :) ).

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Russell Wallace
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3aa6b900.84620798@news.iol.ie>
On 07 Mar 2001 15:11:36 -0500, Marco Antoniotti <·······@cs.nyu.edu>
wrote:

>But why shouldn't s/he use Common Lisp to do the implementation?

Well, in that case the Common Lisp vendor has already done 99% of the
work for you. In the context of a student project, that's cheating :)

-- 
"To summarize the summary of the summary: people are a problem."
···············@esatclear.ie
http://www.esatclear.ie/~rwallace
From: Marco Antoniotti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <y6cbsrd41wm.fsf@octagon.mrl.nyu.edu>
········@esatclear.ie (Russell Wallace) writes:

> On 07 Mar 2001 15:11:36 -0500, Marco Antoniotti <·······@cs.nyu.edu>
> wrote:
> 
> >But why shouldn't s/he use Common Lisp to do the implementation?
> 
> Well, in that case the Common Lisp vendor has already done 99% of the
> work for you. In the context of a student project, that's cheating :)

But in that way you are not exposing the student to the simple and
true fact that Scheme is a (very) small subset of Common Lisp.

Apart from that, I would be surprised if an educational institution
today required a student to write from scratch (meaning, in some form
of portable assembler, like C :) ) a Scheme interpreter.  I'd bet that
the standard exercise "let's write a metacircular interpreter in Lisp"
is far more common.  Why not start with CL then?

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Russell Wallace
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3aa6c42a.87479532@news.iol.ie>
On 07 Mar 2001 18:03:53 -0500, Marco Antoniotti <·······@cs.nyu.edu>
wrote:

>I'd bet that
>the standard exercise "let's write a metacircular interpreter in Lisp"
>is far more common.  Why not start with CL then?

Yes, that's perfectly reasonable.

(Though I suspect implementing call/cc might be a major headache.)

-- 
"To summarize the summary of the summary: people are a problem."
···············@esatclear.ie
http://www.esatclear.ie/~rwallace
From: Marco Antoniotti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <y6c4rx53zjd.fsf@octagon.mrl.nyu.edu>
········@esatclear.ie (Russell Wallace) writes:

> On 07 Mar 2001 18:03:53 -0500, Marco Antoniotti <·······@cs.nyu.edu>
> wrote:
> 
> >I'd bet that
> >the standard exercise "let's write a metacircular interpreter in Lisp"
> >is far more common.  Why not start with CL then?
> 
> Yes, that's perfectly reasonable.
> 
> (Though I suspect implementing call/cc might be a major headache.)

It is a major headache no matter what.  At that point the level of
complexity is already such that you might as well go for the full
CL. :)

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Kellomäki Pertti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <xfzitlk4qwh.fsf@arokyyhky.cs.tut.fi>
Marco Antoniotti <·······@cs.nyu.edu> writes:
> Apart from that, I would be surprised if an educational institution
> today required a student to write from scratch (meaning, in some form
> of portable assembler, like C :) ) a Scheme interpreter.

I would be surprised if more than a handful of the Scheme
implementations around were in fact student projects. My bet is that
apart from the research implementations, the rest were born out of the
joy of hacking. Not that doing research in any way precludes joy of
course.

If I had had access to Scheme literature when I implemented my Lisp
interpreter, I would have implemented Scheme. I didn't, so I ended up
implementing a bastard cousin of CL instead. 

>  I'd bet that the standard exercise "let's write a metacircular
> interpreter in Lisp"
> is far more common.  Why not start with CL then?

The learning experience one gets from implementing garbage collection
and figuring out how to represent cons cells, numbers, etc. at the low
level is considerably different from what one would get from writing a
metacircular interpreter in CL.

Do you consider it a bad thing for students to get hands-on experience
with language implementation techniques?
-- 
Pertti Kellom\"aki, Tampere Univ. of Technology, Software Systems Lab
From: Marco Antoniotti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <y6cwva0us7p.fsf@octagon.mrl.nyu.edu>
Kellomäki Pertti <··@arokyyhky.cs.tut.fi> writes:

> The learning experience one gets from implementing garbage collection
> and figuring out how to represent cons cells, numbers, etc. at the low
> level is considerably different from one would get from writing a
> metacircular interpreter in CL.

Well.  Is that standard practice?  I bet it is not.  And I still am
convinced that once you get down to GC and things like that, you are
already at such a level of complexity that you might as well go for CL
instead of implementing yet another version of a less usable language
like Scheme.

> Do you consider it a bad thing for students to get hands-on experience
> with language implementation techniques?

No.  I am concerned with the fact that students don't get exposed to
the facts, like "Scheme is a small subset of CL" and "CL is more
useful and usable than Scheme".

Cheers


-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Craig Brozefsky
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <87r908f9s4.fsf@piracy.red-bean.com>
Marco Antoniotti <·······@cs.nyu.edu> writes:

> Well.  Is that standard practice?  I bet it is not.  And I still am
> convinced that once you get down to GC and things like that, you are
> already at such a level of complexity that you might as well go for CL
> instead of implementing yet another version of a less usable language
> like Scheme.

I think it's reasonable that one who was interested in doing some
practical GC research for lisp-like languages would have a
significantly easier time dealing with the source code for many of the
free implementations of Scheme, in particular the smaller VM based
ones.  They are shorter, simpler, and easier to bootstrap than the
code bases for the Free CL implementations.  Also, the interface
between the GC subsystem and the rest of the system is often much
simpler.  Scheme48 is one of my favorites for this type of stuff.

This is not a slight to those working on the Free CL implementations.

Also, fully building a CL implementation requires a lot of legwork
that someone who merely wanted to investigate the effects of a
particular GC strategy on a class of programs would not really need to
do to accomplish their goal.

> No.  I am concerned with the fact that students don't get exposed to
> the facts, like "Scheme is a small subset of CL" and "CL is more
> useful and usable than Scheme".

For a given set of goals it certainly is, and I use it daily because
of that.  However, for other goals, some of which I also find worthy
and interesting, looking at and hacking on the simpler Scheme
implementations is preferable.

-- 
Craig Brozefsky                             <·····@red-bean.com>
In the rich man's house there is nowhere to spit but in his face
					             -- Diogenes
From: Craig Brozefsky
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <87k85zeog0.fsf@piracy.red-bean.com>
Erik Naggum <····@naggum.net> writes:

> * Craig Brozefsky <·····@red-bean.com>
> > I think it's reasonable that one who was interested in doing some
> > practical GC research for lisp-like languages would have a significantly
> > easier time dealing with the source code for many of the free
> > implementations of Scheme, in particular the smaller VM based ones.
> 
>   This is probably true, but how useful is the result of that work?  How
>   much work will anyone who wants to use it in a real Lisp have to do?  How
>   many important real-world issues did the Scheme user ignore as part of
>   having an easier time?

I think it is useful because in the past I have run into situations
where I would want to explore a problem a bit more before I dove in
headlong.  In those cases I can use such research work to determine
whether I want to continue, cognizant of the limitations of its
predictive capabilities because of its isolation from many real-world
issues that may arise.

-- 
Craig Brozefsky                             <·····@red-bean.com>
In the rich man's house there is nowhere to spit but in his face
					             -- Diogenes
From: ········@hex.net
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <%DCp6.28780$lj4.662430@news6.giganews.com>
Marco Antoniotti <·······@cs.nyu.edu> writes:
> ········@esatclear.ie (Russell Wallace) writes:
> 
> > On 07 Mar 2001 02:27:54 +0000, Erik Naggum <····@naggum.net> wrote:
> > 
> 	... stuff Erik wrote deleted ...
> > 
> > Valid points. However, I'm still of the opinion that for a typical
> > postgrad student, say, a Scheme is about the right level of challenge;
> > it's simple enough that you can wrap your mind around the whole
> > language without needing many years of experience in it, and a full
> > implementation isn't too much work to be in the scope of what one
> > person can do.

> But why shouldn't s/he use Common Lisp to do the implementation?

> I strongly advocate this approach.  At least you can get across the
> point that Scheme is a strict subset of Common Lisp. (And no! You
> are not allowed to mention call/cc in this context :) ).

The downside is that the task is fairly trivial, largely
exercising one's understanding of CL and Scheme.  It may be more
educational to implement atop a fundamentally "dumber" language in
that you have to build:
 - A garbage collector;
 - A model for mapping stuff like C stack frames onto the "Lispy"
   code storage model;
 - A name space manager...

I'd observe one further thing about the "challenge" of it: since
implementing Scheme (and CL, for that matter) has been done before,
unless there is something Rather Particular about the implementation
approach, this is a task that commonly won't contribute much to
"new learning" in the discipline.

"New learning" is _quite_ necessary to Ph.D studies, but not nearly so
much for M.Sc studies.  As a result, for a Master's student to construct
Yet Another Scheme is probably "educationally adequate."  The same is
not true for a Ph.D student, and I'd definitely distinguish between
the two...
-- 
(reverse (concatenate 'string ··········@" "enworbbc"))
http://vip.hex.net/~cbbrowne/lisp.html
Rules of the Evil Overlord #58.  "If it becomes necessary to escape, I
will  never stop  to  pose  dramatically and  toss  off a  one-liner."
<http://www.eviloverlord.com/>
From: Kent M Pitman
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <sfwd7bs28bp.fsf@world.std.com>
········@hex.net writes:

> "New learning" is _quite_ necessary to Ph.D studies, but not nearly so
> much for M.Sc studies.  As a result, for a Master's student to construct
> Yet Another Scheme is probably "educationally adequate."  The same is
> not true for a Ph.D student, and I'd definitely distinguish between
> the two...

Obviously this is a matter on which people could disagree.  But I
doubt MIT would accept yet another scheme as educationally adequate
for a master's student to spend time on...

I think undergrad work is the place for duplicating both goals and techniques
that have been tried before.  For example, implementing Scheme at all.

I think masters work should seek to apply new techniques to old areas or
old techniques to new areas.  For example, applying some well-understood
register allocation technique or data flow analysis to Scheme when such
has not been done before.

I think phd should be about thinking up a new subject area and
identifying ways of thinking about it.  For example, inventing the
notion of a reflective lisp or some such thing, before such had been
thought of, and then saying which existing techniques for parsing,
compiling, etc. might be relevant or interesting to it, or why they'd
have to be modified to make sense.
From: Craig Brozefsky
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <87itllgr71.fsf@piracy.red-bean.com>
········@esatclear.ie (Russell Wallace) writes:

> >  Common Lisp.  Implementing a Scheme is no challenge.  Figuring out how to
> >  implement a fundamental core of Common Lisp is quite interesting work,
> >  and starting to do it, so you can test your conclusions.  Figuring out
> >  the impact of implementing another feature, and like projects, will also
> >  yield very valuable information.  Figuring out how to layer features in
> >  Common Lisp so you can implement them in stages is also very useful --
> >  many domain-specific languages grow that way, and growing implementation
> >  from the bottom up and from the top down at the same time will yield a
> >  lot of exciting insight.  Then there's realizing the support environment
> >  around the language.  How much do you really need to get anywhere is not
> >  something people know a priori.
> 
> Valid points. However, I'm still of the opinion that for a typical
> postgrad student, say, a Scheme is about the right level of challenge;
> it's simple enough that you can wrap your mind around the whole
> language without needing many years of experience in it, and a full
> implementation isn't too much work to be in the scope of what one
> person can do.

As someone who started with Scheme, and then has seen several people
around me skip Scheme and learn Common Lisp, I just haven't seen this
difference in acquisition and comprehension speed.  Skill acquisition
for the core of either language is very similar.  The difference is
that the CL system has a complete language built around it, so one can
just read the very excellent documentation for the rest of CL, and not
have to spend time shopping for or building those features all over
again.  

In other words, the parts of the languages which are critical for
learning and have the steepest curve are similar between Lisp and
Scheme, modulo some minor details.  But the time to acquire the
language and its tools well enough for larger projects is definitely
in CL's favor.

I've built sizeable projects in both languages (the largest by an order
of magnitude is a CL app) and I find portable (R5RS and SRFI) Scheme
is quite a pain to work in.  Things like slib and guile ease the pain
somewhat, but it's still oppressive.

> Hmm, unless I'm missing something, the main advantage of a separate
> function namespace seems to be that it allows generating more
> efficient code, without needing declarations or general type
> inferencing? For commercial applications this is important, but for
> academic purposes it would seem to be much less so.

Well, I like it because it allows me to have intuitive and short names
for my methods and functions, and intuitive and short names for
variables and arguments, and there is no name conflict.  The biggest
example is the LIST symbol: it's an intuitive variable name, and also a
standard function.
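
For instance (a trivial sketch; FROB is just a made-up name):

(defun frob (list)
  ;; LIST the variable and LIST the function live in separate
  ;; namespaces, so both can be used in the same body.
  (list (first list) (length list)))

(frob '(a b c))  ;=> (A 3)

In a Lisp-1 you'd have to rename one of them.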

-- 
Craig Brozefsky                             <·····@red-bean.com>
In the rich man's house there is nowhere to spit but in his face
					             -- Diogenes
From: Russell Wallace
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3aa6bc0e.85403280@news.iol.ie>
On 07 Mar 2001 16:16:50 -0600, Craig Brozefsky <·····@red-bean.com>
wrote:

>In other words, the parts of the languages which are critical for
>learning and have the steepest curve are similar between Lisp and
>Scheme, modulo some minor details.  But acquisition time for learning
>the language and its tools well enough for larger projects is
>definitely in CL's favor.

Yep.

However, the context of the discussion is why there are more
implementations of Scheme, and I think you'll agree it's much easier
to _implement_ Scheme than Common Lisp (or make major changes to an
implementation).

-- 
"To summarize the summary of the summary: people are a problem."
···············@esatclear.ie
http://www.esatclear.ie/~rwallace
From: Kent M Pitman
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <sfwelw9347e.fsf@world.std.com>
········@esatclear.ie (Russell Wallace) writes:

> On 07 Mar 2001 02:27:54 +0000, Erik Naggum <····@naggum.net> wrote:
> 
> >* Russell Wallace
> >> Writing your own little language implementation is a valuable learning
> >> experience. But which would you rather try to implement as a hobby
> >> project, Scheme or Common Lisp? :)
> >
> >  Common Lisp.  Implementing a Scheme is no challenge.  Figuring out how to
> >  implement a fundamental core of Common Lisp is quite interesting work,
> >  and starting to do it, so you can test your conclusions.  Figuring out
> >  the impact of implementing another feature, and like projects, will also
> >  yield very valuable information.  Figuring out how to layer features in
> >  Common Lisp so you can implement them in stages is also very useful --
> >  many domain-specific languages grow that way, and growing implementation
> >  from the bottom up and from the top down at the same time will yield a
> >  lot of exciting insight.  Then there's realizing the support environment
> >  around the language.  How much do you really need to get anywhere is not
> >  something people know a priori.
> 
> Valid points. However, I'm still of the opinion that for a typical
> postgrad student, say, a Scheme is about the right level of challenge;
> it's simple enough that you can wrap your mind around the whole
> language without needing many years of experience in it, and a full
> implementation isn't too much work to be in the scope of what one
> person can do.

But what if that's not the nature of the world?  I don't think it is.
The world is not something you can wrap your head around without
needing years of experience.  And every year you spend not gaining that
experience is a year denied to the world, time in which you might
actually be productive solving the real problems of society instead of
the made-up problems of the ivory tower.  

I'm not knocking research, mind you--I think the ivory tower has a place.
But reimplementing Scheme is not research.  Research moved beyond that
long ago.

I'm curious--I don't actually know--but would this fly in other fields
of science?  Would, for example, a "postgrad" student spend time
wrapping his head around a simplified model of the world?  I know it's
sometimes done in physics, to cut down the very high number of
variables involved, and in ordinary "applied" areas like engineering.
Would it be a good way to train to be a general purpose
chemist to specialize on only a restricted space and close one's eyes
to what other chemists were doing because it upset their personal
sense of aesthetics?  Would it make you a good doctor to study only
diseases that seemed well-formed because they were easier to wrap
one's head around and it was messy to think about diseases and
syndromes we couldn't quite characterize?  Would it make you a good
lifeguard to study rescue techniques only for people who were going to
be cooperative in their struggle for survival?  Can lawyers study
idealized legal systems that don't have messy problems like judges
with an attitude or juries that don't like or understand the way the
law is written?  Why ought computer science folks get a free pass to be
what amounts to techno-bums and not have to take grief for it? ;-)

Society makes a big investment in training and teaching people.  One
then has a debt to society to pay back.  Is this debt well-served by
people doing the self-indulgent thing of one after another solving
problems that others have already confronted and patting oneself on
the back about it?  That seems to me exactly the appropriate role of a
"pregrad", not a "postgrad".  One graduates having demonstrated mastry
in what is known, and moves on to what is not known.

It's been said before but it bears repeating:  The world does not need yet
another GPL'd Scheme implementation.  There are enough.  There are many
more Scheme than CL implementations, and there are enough CL implementations.
Languages don't need to be implemented and reimplemented and reimplemented.
They need to be used.  

I suspect people like implementing Scheme because it is easier to implement
than to use.

CL is definitely easier to use than to implement.  I think that's more
appropriate design.  The hard work has been factored out as a loop
invariant and is done only once.  Design once, use many times.
From: Russell Wallace
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3aa6b721.84141950@news.iol.ie>
On Wed, 7 Mar 2001 16:59:33 GMT, Kent M Pitman <······@world.std.com>
wrote:

>I'm curious--I don't actually know--but would this fly in other fields
>of science?

Well, we start off by teaching Newton's laws of physics and the ball
and stick model of chemistry rather than the more complex general
laws, because they're simpler, therefore easier to deal with. One
can't restrict oneself to the simple models forever, but they do have
their uses.

>Society makes a big investment in training and teaching people.  One
>then has a debt to society to pay back.  Is this debt well-served by
>people doing the self-indulgent thing of one after another solving
>problems that others have already confronted and patting oneself on
>the back about it?  That seems to me exactly the appropriate role of a
>"pregrad", not a "postgrad".  One graduates having demonstrated mastry
>in what is known, and moves on to what is not known.

A valid point; one could reasonably argue that writing a Scheme
implementation is a good undergrad project, but by the time you've
graduated you should be spending your time inventing new wheels rather
than reinventing old ones.

>I suspect people like implementing Scheme because it is easier to implement
>than to use.

That's a major part of it. Also it's simpler, therefore easier to
tinker with.

Let's take objects, for example. If you want to get on with writing OO
code in a Lisp family language, obviously the sensible thing to do is
to use CLOS which already exists and works well. But what if you want
to experiment with a new object system design you've thought of?
Writing or modifying a Scheme implementation for the purpose is a
reasonable thing to do.

>CL is definitely easier to use than to implement.  I think that's more
>appropriate design.  The hard work has been factored out as a loop
>invariant and is done only once.  Design once, use many times.

I agree.

-- 
"To summarize the summary of the summary: people are a problem."
···············@esatclear.ie
http://www.esatclear.ie/~rwallace
From: Marco Antoniotti
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <y6celw9427y.fsf@octagon.mrl.nyu.edu>
········@esatclear.ie (Russell Wallace) writes:

	...

> >I suspect people like implementing Scheme because it is easier to implement
> >than to use.
> 
> That's a major part of it. Also it's simpler, therefore easier to
> tinker with.

Of course.  It is very easy to tinker with a CL implementation of
Scheme. :)

> Let's take objects, for example. If you want to get on with writing OO
> code in a Lisp family language, obviously the sensible thing to do is
> to use CLOS which already exists and works well. But what if you want
> to experiment with a new object system design you've thought of?
> Writing or modifying a Scheme implementation for the purpose is a
> reasonable thing to do.

If you "have a new idea about a new OO idea" it is much easier to
tinker with a CL implementation of the new OO idea that to modify a
Scheme implementation.  That is a given.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Pierre R. Mai
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <87wva0xpkf.fsf@orion.bln.pmsf.de>
········@esatclear.ie (Russell Wallace) writes:

> That's a major part of it. Also it's simpler, therefore easier to
> tinker with.
> 
> Let's take objects, for example. If you want to get on with writing OO
> code in a Lisp family language, obviously the sensible thing to do is
> to use CLOS which already exists and works well. But what if you want
> to experiment with a new object system design you've thought of?
> Writing or modifying a Scheme implementation for the purpose is a
> reasonable thing to do.

I think you picked an especially bad example.  I haven't seen many new
approaches to OO languages based on Scheme.  What I have seen is that
most of the main ideas were either implemented on top of a non-Scheme
Lisp (i.e. predecessors to CL), or in totally different languages.
Scheme OO systems, on the other hand, have mostly been reduced
reimplementations of other OO systems.

Furthermore, if I were to tinker with new OO-approaches, I'd do it
using a good CL implementation that has a powerful MOP, since that
will allow me to concentrate on the new stuff, without redoing the
basic framework.  For example, using the MOP, it was easy to implement
a prototype for an XML transformation "sub-language" embedded in CL
that used predicate-dispatching as its main language feature.  The
whole toolkit, including XML parsing (using expat for the low-level
parsing) and rendering, as well as a small library of useful XML/Tree
predicates, and several worked out examples, weighs in at around 1300
LoC.  And while the predicate-dispatching approach is the best
approach to XML transformation I've seen so far, we were finally
convinced that XML transformation is the wrong solution to most
problems anyway... ;)

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Kent M Pitman
Subject: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <sfwk860mftg.fsf_-_@world.std.com>
"Pierre R. Mai" <····@acm.org> writes:

> [...] using the MOP, it was easy to implement
> a prototype for an XML transformation "sub-language" embedded in CL,
> that used predicate-dispatching as its main language feature.  The
> whole toolkit, including XML parsing (using expat for the low-level
> parsing) and rendering, as well as a small library of useful XML/Tree
> predicates, and several worked out examples, weighs in at around 1300
> LoC.  And while the predicate-dispatching approach is the best
> approach to XML transformation I've seen so far, we were finally
> convinced that XML transformation is the wrong solution to most
> problems anyway... ;)

Can you elaborate on this conclusion a bit?  Do you have a theory of
what is preferred or just a sense that this is bankrupt?

Seems to me Lisp needs to embrace XML somehow but there are obviously
many ways to do this.  I'm therefore curious about experiences both good
and bad, as I imagine are others.  So anything you wanted to share by
way of summary, I'd certainly find interesting.
From: Pierre R. Mai
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <87lmpix3ml.fsf@orion.bln.pmsf.de>
Kent M Pitman <······@world.std.com> writes:

Sorry for the very late response, but the past weeks have been a bit
hectic, and I wanted to do justice to your question, and that needed a
bit of time, which I lacked...

> "Pierre R. Mai" <····@acm.org> writes:
> 
> > [...] using the MOP, it was easy to implement
> > a prototype for an XML transformation "sub-language" embedded in CL,
> > that used predicate-dispatching as its main language feature.  The
> > whole toolkit, including XML parsing (using expat for the low-level
> > parsing) and rendering, as well as a small library of useful XML/Tree
> > predicates, and several worked out examples, weighs in at around 1300
> > LoC.  And while the predicate-dispatching approach is the best
> > approach to XML transformation I've seen so far, we were finally
> > convinced that XML transformation is the wrong solution to most
> > problems anyway... ;)
> 
> Can you elaborate on this conclusion a bit?  Do you have a theory of
> what is preferred or just a sense that this is bankrupt?

It's my opinion that direct XML to XML (or other formats)
transformations are completely the wrong way to go about handling
data.  It seems to me that this whole approach flies in the face of
what we have learned in the past several decades about data
representation.

Just like one doesn't equate internal and external representations by
dumping out parts of main-memory and calling the result an external
file (many "modern" applications notwithstanding), one shouldn't
equate internal and external representations by working directly on
what is, for all intents and purposes a 1:1 mapping of an external
representation.

Yet somehow (through the idea of style-sheets and transformation
languages separate from programming languages) it has become
fashionable to do just that, by turning XML into a more or less 1:1
in-memory tree-representation (whether you call it DOM or the groves
of DSSSL/HyTime/... seems irrelevant here), and writing code that
works directly on this 1:1 mapping.  All domain-specific (or even just
DTD-specific) information and abstractions are not mirrored in the
data-structures, but are embedded in the application code.  This is
like using c[ad]*r/rplac[ad] and lists all over the place, instead of
structures or classes and accessors.
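
To make the analogy concrete (a contrived sketch; the element layout,
ORDER-TOTAL and the class definitions are all invented here):

;; Working on the 1:1 tree, knowledge of where things live is
;; repeated in every function that touches the data:
(defun order-total (node)
  (reduce #'+ (cddr node)
          :key (lambda (item) (parse-integer (third item)))))

;; versus capturing the abstraction once and writing against it:
(defclass order () ((items :initarg :items :accessor order-items)))
(defclass item  () ((price :initarg :price :accessor item-price)))

(defun order-total* (order)
  (reduce #'+ (order-items order) :key #'item-price))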

If we look at other tree (or graph) transforming programs, like
e.g. a compiler, we see that very few compilers will work on the
abstract syntax tree directly in any but the first few stages.  After
that, more specific data-structures are found that are adapted to
processing demands, _and_ that capture domain-specific abstractions in
_one_ place.  Indeed often the selection of the right internal
representations is the most important design work, and once you get
that right, the code follows pretty easily (_if_ you've found just the
right representation).

Thus restricting oneself to general 1:1 internal representations of
XML seems not a good idea(tm).  It IMHO turns into a disaster once
dynamism enters the picture, if we e.g. get evolving data schemas,
etc.

And that's the reason why I've come to the conclusion that the
predicate-dispatching (with user-extensible patterns as predicates)
sub-language approach is flawed:  It indirectly encourages staying
close to the XML data-structures, because it becomes so easy to work
with them.  In that sense it is pretty similar to regular expressions:
Since they make the wrong approach so easy and cheap to use (at
first), they encourage working this way, until it is too late, and the
damage (to both the project, and the mind of the programmer) has been
done.

IMHO the only sensible way of dealing with *ML (and especially with
XML) is to handle them the orthodox way, by treating them just like
any other application-specific external representation.  Parse them,
using any applicable domain knowledge, turn them into an
application-specific internal representation, and forget that they
ever had anything to do with *ML.  The same way for the other
direction.  Especially forget about DTDs, etc.
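
In code, that amounts to something like this (a toy sketch, assuming a
parser that hands back nested lists of the form (tag attribute-plist
. children); all the names are invented):

(defstruct purchase-order customer items)

;; Cross the XML boundary exactly once, at the edge of the program;
;; everything downstream sees PURCHASE-ORDER objects, never the tree.
(defun purchase-order-from-xml (node)
  (make-purchase-order
   :customer (getf (second node) :customer)
   :items    (mapcar #'third (cddr node))))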

And for that approach, you only need a sufficiently usable interface
to your low-level parser (under no circumstances be tempted to write
the low-level parser yourself.  You haven't seen the horrors of
parsing until you've seen the horrors of SGML parsing.  And XML is
quickly headed the same way, through the layered standards like
XMLNS), for which something like SAX is totally sufficient.

> Seems to me Lisp needs to embrace XML somehow but there are obviously
> many ways to do this.  I'm therefore curious about experiences both good
> and bad, as I imagine are others.  So anything you wanted to share by
> way of summary, I'd certainly find interesting.

Other than in a marketing sense, I don't think that Lisp really needs
to embrace XML in any fundamental way.  If anything, some of the
layered stuff is interesting, like e.g. SOAP (once it has really
materialised, that is).  But it seems to me that CL should interface
to those protocols in a way that hides XML completely.  Just like a
programmer using CORBA shouldn't have to deal with the IIOP and its
on-the-wire data-encoding, a user of SOAP (or similar stuff) shouldn't
ever need to know that XML is used.

Indeed I kind of suspect that if any new and really useful mechanisms
are developed on top of XML, one of the first things that will happen
once they get more wide-spread acceptance will be to decouple them
from the underlying encoding, so that application/environment-specific
encodings can be substituted.

Most of what I've written in this posting seems to be obvious, and
indeed it should be.  Nevertheless, most of the uses of *ML I've seen
over the years haven't followed this simple path, but rather let *ML
seep into the application's core.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Kent M Pitman
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <sfwr8zazn1v.fsf@world.std.com>
"Pierre R. Mai" <····@pmsf.de> writes:

> 
> Kent M Pitman <······@world.std.com> writes:
> 
> Sorry for the very late response, but the past weeks have been a bit
> hectic, and I wanted to do justice to your question, and that needed a
> bit of time, which I lacked...
> 
> > "Pierre R. Mai" <····@acm.org> writes:
> > 
> > > [...] using the MOP, it was easy to implement
> > > a prototype for an XML transformation "sub-language" embedded in CL,
> > > that used predicate-dispatching as its main language feature.  The
> > > whole toolkit, including XML parsing (using expat for the low-level
> > > parsing) and rendering, as well as a small library of useful XML/Tree
> > > predicates, and several worked out examples, weighs in at around 1300
> > > LoC.  And while the predicate-dispatching approach is the best
> > > approach to XML transformation I've seen so far, we were finally
> > > convinced that XML transformation is the wrong solution to most
> > > problems anyway... ;)
> > 
> > Can you elaborate on this conclusion a bit?  Do you have a theory of
> > what is preferred or just a sense that this is bankrupt?
> 
> It's my opinion that direct XML to XML (or other formats)
> transformations are completely the wrong way to go about handling
> data.  It seems to me that this whole approach flies in the face of
> what we have learned in the past several decades about data
> representation.
> 
> Just like one doesn't equate internal and external representations by
> dumping out parts of main-memory and calling the result an external
> file (many "modern" applications notwithstanding), one shouldn't
> equate internal and external representations by working directly on
> what is, for all intents and purposes a 1:1 mapping of an external
> representation.

Doesn't Unix do precisely this?  That is, the system-wide paradigm is
the pipe.  A pipe externalizes to text and the next program re-internalizes to
memory, then outputs to text, so the next thing can parse it back.  It would
seem inefficient, but it's relatively robust in spite of that.  About the
only things one loses on are (a) speed [mostly something people seem willing
to pay for] and (b) that you have to write a lot of parsers [which XML
disposes of by adding structure to text files].

> Yet somehow (through the idea of style-sheets and transformation
> languages separate from programming languages) it has become
> fashionable to do just that, by turning XML into a more or less 1:1
> in-memory tree-representation (whether you call it DOM or the groves
> of DSSSL/HyTime/... seems irrelevant here), and writing code that
> works directly on this 1:1 mapping.  All domain-specific (or even just
> DTD-specific) information and abstractions are not mirrored in the
> data-structures, but are embedded in the application code.  This is
> like using c[ad]*r/rplac[ad] and lists all over the place, instead of
> structures or classes and accessors.

Perhaps, though XML can certainly be mapped to native data structures.
I think a better analogy is to say XML is by default like STRUCTURE-OBJECT
(well, not accessed the same way, but the point is that it has a primitive
notation, as with #S, that is universal and takes a certain amount of storage,
even if some things like booleans and bytes could in principle be packed
better).

> If we look at other tree (or graph) transforming programs, like
> e.g. a compiler, we see that very few compilers will work on the
> abstract syntax tree directly in any but the first few stages.  After
> that more specific data-structures are found, that are adapted to
> processing demands, _and_ that capture domain-specific abstractions in
> _one_ place.  Indeed often the selection of the right internal
> representations is the most important design work, and once you get
> that right, the code follows pretty easily (_if_ you've found just the
> right representation).
> 
> Thus restricting oneself to general 1:1 internal representations of
> XML seems not a good idea(tm).  It IMHO turns into a disaster once
> dynamism enters the picture, if we e.g. get evolving data schemas,
> etc.

Restricting oneself to this, certainly.  But having it as a fallback
just as we have lists to fall back to in Lisp so that people can, if
they want, write transforms that use that kind of data...
 
I've often said: the key to intelligent behavior isn't picking a right
representation, but having a repertoire and knowing when to shift.
Each has its virtues.  And having a general purpose tree transformation
facility seems an important arrow for the quiver...

> And that's the reason why I've come to the conclusion that the
> predicate-dispatching (with user-extensible patterns as predicates)
> sub-language approach is flawed:  It indirectly encourages staying
> close to the XML data-structures, because it becomes so easy to work
> with them.  In that sense it is pretty similar to regular expressions:
> Since they make the wrong approach so easy and cheap to use (at
> first), they encourage working this way, until it is too late, and the
> damage (to both the project, and the mind of the programmer) has been
> done.

This is certainly an interesting claim.  It has a realistic sound to it,
though I've not seen it happen a lot in practice.  Do you think this happens
with Lisp and, say, standard-object? (I think the answer is "yes".)  If
so, do you think it's a problem? (I think the answer is "no.")
(But I don't have a strong position on this, just a "default position", and
I'm both open to and interested in reasoned or experiential arguments
about this.)

> IMHO the only sensible way of dealing with *ML (and especially with
> XML) is to handle them the orthodox way, by treating them just like
> any other application-specific external representation.  Parse them,
> using any applicable domain knowledge, turn them into an
> application-specific internal representation, and forget that they
> ever had anything to do with *ML.  The same way for the other
> direction.  Especially forget about DTDs, etc.

To an extent, I don't disagree with this.  I guess I just think it's like
Lisp read/print.  Yes, when you read something in, you might do other things
internally in your programs.  That is, not all Lisp is lists just because
programs are lists.  Yet, it's still useful to have lists as a tool for
those programs that want them.  I had the impression you were saying that
it was bad to do tree transformation in the first message.  Maybe you are
saying that or maybe you are only saying it's bad to rely on general purpose
structures as your only option.  There's a difference between making sure
everything implements the PRINT-XML generic function and saying that 
everything is implemented as a subclass of XML-CLASS.

> And for that approach, you only need a sufficiently usable interface
> to your low-level parser (under no circumstances be tempted to write
> the low-level parser yourself.  You haven't seen the horrors of
> parsing until you've seen the horrors of SGML parsing.

Yeah, I've written an SGML parser.  (That convinced me that the SGML
designers didn't understand the difference between parsing and evaluation,
btw.  XML at least partly, though not entirely, fixed that.)

> And XML is
> quickly headed the same way, through the layered standards like
> XMLNS), for which something like SAX is totally sufficient.

Is XMLNS the namespace system?  If so, I agree that's a mess. It's like
CL symbols, only without symbol sharing.  What a disaster that is.  But
that's the main problem I've seen with it--have you seen other problems
as well?

> > Seems to me Lisp needs to embrace XML somehow but there are obviously
> > many ways to do this.  I'm therefore curious about experiences both good
> > and bad, as I imagine are others.  So anything you wanted to share by
> > way of summary, I'd certainly find interesting.
> 
> Other than in a marketing sense, I don't think that Lisp really needs
> to embrace XML in any fundamental way.

I'd like to see it have PRINT-XML methods .. and some way to establish 
xml-external to lisp-internal mappings .. and perhaps *PRINT-XML-READABLY*.
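
Something along these lines, say (just a sketch of the shape I have in
mind; none of this exists):

(defgeneric print-xml (object stream)
  (:documentation "Write OBJECT to STREAM as XML."))

(defmethod print-xml ((object integer) stream)
  (format stream "<fixnum value=\"~D\"/>" object))

(defmethod print-xml ((object string) stream)
  ;; real code would escape markup characters here
  (format stream "<string>~A</string>" object))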

> If anything then some of the
> layered stuff is interesting, like e.g. SOAP (once they have really
> materialised, that is).

I've gotten a book on SOAP but never slogged through it.  I don't know why
the people who make this stuff can never just give a capsule summary and why
I have to read huge books.  Can you summarize how SOAP works in some
way that is both brief and conceptually productive in terms of understanding
what it's really doing?

> But it seems to me that CL should interface
> to those protocols in a way that hides XML completely.  Just like a
> programmer using CORBA shouldn't have to deal with the IIOP and its
> on-the-wire data-encoding, a user of SOAP (or similar stuff) shouldn't
> ever need to know that XML is used.

I guess I think dealing with XML has the same virtue as dealing with
s-expressions has for Lisp.  In essence, I think XML is the s-expression
for non-Lispers.  Do you disagree?  And as such, it seems worth doing for
its own sake.

I recall some years back a conflict on the Lisp Machine about whether to
represent various in-file data structures as "lispy" or "generic", e.g.,
DEFSYSTEM's version database.  It might be nice, for file snapshotting, for
non-Lisp tools to be able to read the data.  But it's a pain to write
parsers/printers.  At least XML means you can do that once and not have to
repeat it for each external tool's preferred data style.

> Indeed I kind of suspect that if any new and really useful mechanisms
> are developed on top of XML, one of the first things that will happen
> once they get more wide-spread acceptance will be to decouple them
> from the underlying encoding, so that application/environment-specific
> encodings can be substituted.

Certainly I'm not a fan of DOM-only internal representations if that's
what this is saying.
 
> Most of what I've written in this posting seems to be obvious, and
> indeed it should be.  Nevertheless, most of the uses of *ML I've seen
> over the years haven't followed this simple path, but rather let *ML
> seep into the application's core.

Do you mean they have let DOM become the data structure of choice?  If not,
I'm missing your point here [obvious or not].
 
> Regs, Pierre.

Thanks for replying even after all this time.  I'm quite interested
to hear your followups on this whenever you have the time.
From: Hartmann Schaffer
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <slrn9cijc7.qen.hs@paradise.nirvananet>
In article <···············@world.std.com>, Kent M Pitman wrote:
> ...
>Doesn't unix do precisely this.  That is, the system-wide paradigm is 
>pipe.  Pipe externalizes to text and the next program re-internalizes to
>memory, then outputs to text, so the next thing can parse back.  It would
>seem inefficient, but it's relatively robust in spite of that.  About the
>only thing one loses on are (a) speed [mostly something people seem willing
>to pay form] and (b) that you have to write a lot of parsers [which xml
>disposes of by adding structure to text files].

i suspect most programs do that, but there are no restrictions on what you
can feed through a pipe

hs
From: Pierre R. Mai
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <87d7aep1iz.fsf@orion.bln.pmsf.de>
Kent M Pitman <······@world.std.com> writes:

> > It's my opinion that direct XML to XML (or other formats)
> > transformations are completely the wrong way to go about handling
> > data.  It seems to me that this whole approach flies in the face of
> > what we have learned in the past several decades about data
> > representation.
> > 
> > Just like one doesn't equate internal and external representations by
> > dumping out parts of main-memory and calling the result an external
> > file (many "modern" applications notwithstanding), one shouldn't
> > equate internal and external representations by working directly on
> > what is, for all intents and purposes a 1:1 mapping of an external
> > representation.
> 
> Doesn't unix do precisely this.  That is, the system-wide paradigm is 
> pipe.  Pipe externalizes to text and the next program re-internalizes to
> memory, then outputs to text, so the next thing can parse back.  It would
> seem inefficient, but it's relatively robust in spite of that.  About the
> only thing one loses on are (a) speed [mostly something people seem willing
> to pay form] and (b) that you have to write a lot of parsers [which xml
> disposes of by adding structure to text files].

It doesn't dispose of writing the parsers, or at least it shouldn't.
Just because XML adds a little standardised structure to the external
format, that doesn't mean that the internal format should be an
isomorphic mapping of that external structure, except under very
special circumstances.  Especially since XML is only like very
restricted s-exps, i.e. s-exps which only know lists, strings, and
_maybe_ keywords/symbols.

Any program that does something remotely useful with the data it gets
via XML will have to reinterpret that data using syntactic and
semantic knowledge that is not present in the XML data itself.  It can
either do this by parsing the XML into data-structures that can
represent that additional knowledge, which I'd claim is the right way
to do it, or it can distribute that knowledge throughout the program
by including it in all functions that touch the data, which is what
working directly on 1:1 mappings of XML will do.

> Perhaps, though XML can certainly be mapped to native data structures.
> I think a better analogy is to say XML is defaultly like STRUCTURE-OBJECT
> (well, not accessed the same, but the point is that it has a primitive
> notation, as with #S, that is universal and takes a certain storage, even
> if some things like booleans and bytes could in principle be packed better).

But it doesn't encode the fact that something is a boolean, byte or
whatever.  It only really knows about (P)CDATA, i.e. strings.  Any
knowledge that some attribute value or element content is a boolean is
completely outside of XML itself.  This is in stark contrast to CL's
s-exps, which specify exactly the type of data contained therein.
XML is much more low-level than this.
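
Compare, as a trivial sketch:

;; the Lisp reader already hands back typed data:
(read-from-string "(:count 1234 :flag t)")  ;=> (:COUNT 1234 :FLAG T)

;; from XML you get only a string; deciding that it denotes an
;; integer is entirely the application's business:
(parse-integer "1234")                      ;=> 1234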

Working directly on a 1:1 mapping of XML is more like treating

#S(DEMO :KEY1 VALUE1 :KEY2 VALUE2)

not as a structure of type DEMO with fields KEY1 and KEY2, but as a
list whose first element encodes the structure type, and whose fields
are encoded by a p-list with strings (or sub-lists) as values.  While
the reader function for structures probably does exactly this, it will
immediately transform this intermediate representation into the
structure representation that the application will see.

So getting from the intermediate representation to something really
useful requires additional parsing/processing.  In that way it is
little different from other external formats.
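
Concretely, the 1:1 view of the example above would be something like

("DEMO" (:key1 "VALUE1" :key2 "VALUE2"))

so every consumer ends up writing (getf (second node) :key1) where the
structure representation would have let it write (demo-key1 node).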

> Restricting oneself to this, certainly.  But having it as a fallback
> just as we have lists to fall back to in Lisp so that people can, if
> they want, write transforms that use that kind of data...

I agree that having a 1:1 mapping from XML to some simplistic
list-based format is a nice thing to have, in that it is one of two
useful interfaces to an XML parser (the other being event-based).  It
can also help you to do simplistic transforms easily (similarly to sed
or perl), and it is a useful stepping stone for writing the parsers
that transform this into more useful internal representations.

Where I get reservations is when one then goes on to create
specialised mechanisms for working directly with this 1:1 mapping.  If
this isn't carefully balanced with other language features and tools,
my experience is that one falls into the same trap that IMHO Perl has
fallen into, by centering too much on regular expressions (which are
very, very powerful in Perl, whereas other features are quite a bit
less powerful, and more verbose), thereby effectively driving people
to work with constructs that are IMHO too low-level.

> I've often said: the key to intelligent behavior isn't picking a right
> representation, but having a repertoire and knowing when to shift.
> Each has its virtues.  And having a general purpose tree transformation
> facility seems an important arrow for the quiver...

My current impression is that a facility that eases transforming the
1:1 mapping into specialised internal representations is the more
important tool.  Only or mostly providing the GP tree transformation
arrow is like giving your knights the best armour, when the longbow
was invented centuries ago...

> > And that's the reason why I've come to the conclusion that the
> > predicate-dispatching (with user-extensible patterns as predicates)
> > sub-language approach is flawed:  It indirectly encourages staying
> > close to the XML data-structures, because it becomes so easy to work
> > with them.  In that sense it is pretty similar to regular expressions:
> > Since they make the wrong approach so easy and cheap to use (at
> > first), they encourage working this way, until it is too late, and the
> > damage (to both the project, and the mind of the programmer) has been
> > done.
> 
> This is certainly an interesting claim.  It has a realistic sound to it,
> though I've not seen it happen a lot in practice.  Do you think this happens

One example would be Perl and regular expressions.

> with Lisp and, say, standard-object? (I think the answer is "yes".)  If

I'm certain that people sometimes rely on standard-class where a
specialised meta-class might be more appropriate. But...

> so, do you think it's a problem? (I think the answer is "no.")

I agree that it probably isn't a problem, or at least not a huge
problem.  But I think that this is for a number of reasons, which
don't apply to XML automatically:

- standard-class and its facilities are much higher-level than 1:1
  XML, therefore the damage done by remaining at this level is
  probably less serious.

- CL (with the addition of the MOP) provides similarly powerful tools
  for working with non-standard-class.

- Going down the ladder (to a level that is IMHO more comparable with
  XML), while CL provides good support for working with lists and
  property lists, the support for working with structures and objects
  is at least as good.

- Furthermore CL even provides facilities for treating lists as
  structures (or vice-versa), thereby allowing easy transitions
  between high- and low-level representations (see the sketch after
  this list).

- CL doesn't provide a specialised transformation facility for lists
  which doesn't apply to standard-class (or vice-versa).
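
The lists-as-structures facility mentioned above is just DEFSTRUCT's
:TYPE option; a minimal sketch:

(defstruct (point (:type list)) x y)

(make-point :x 1 :y 2)  ;=> (1 2)
(point-y '(1 2))        ;=> 2

so the same data can be viewed either as a plain list or through named
accessors, whichever is convenient at the moment.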

To quote you back at yourself, I agree that it is the ecological landscape of a
language that is important here, the balance between features and their
interactions, and not the presence or absence of a particular feature
in itself.

So what I'm really rebelling against is IMHO a strong imbalance in
most current approaches to working with XML, which concentrate too
much on the low-level representation, thereby creating tendencies to
stay at that level long after it is useful.  Building ever more
specialised low-level transformation facilities IMHO just increases
this imbalance, rather than decreasing it, at least if one isn't very
careful about integration with the rest of the language.

> > IMHO the only sensible way of dealing with *ML (and especially with
> > XML) is to handle them the orthodox way, by treating them just like
> > any other application-specific external representation.  Parse them,
> > using any applicable domain knowledge, turn them into an
> > application-specific internal representation, and forget that they
> > ever had anything to do with *ML.  The same way for the other
> > direction.  Especially forget about DTDs, etc.
> 
> To an extent, I don't disagree with this.  I guess I just think it's like
> Lisp read/print.  Yes, when you read something in, you might do other things

But XML parsing/printing _in itself_ does much less than read/print.

> internally in your programs.  That is, not all Lisp is lists just because
> programs are lists.  Yet, it's still useful to have lists as a tool for
> those programs that want them.  I had the impression you were saying that
> it was bad to do tree transformation in the first message.  Maybe you are

Not tree transformation itself.  I just consider it bad to provide a
tree-transformation facility that works only/best with the 1:1 XML
representation.

> saying that or maybe you are only saying it's bad to rely on general purpose
> structures as your only option.  There's a difference between making sure

general purpose nested lists with only strings as leaves.

> everything implements the PRINT-XML generic function and saying that 
> everything is implemented as a subclass of XML-CLASS.

Indeed.  I have no problem with application-specific PRINT-XML or
READ-FROM-XML methods, since they will be working with high-level
objects, turning them into XML, or parsing them from XML, thereby
leaving the restrictive structure of XML behind.  And I do think that
tools that assist in writing such methods are valuable additions.

What I do criticise is working with instances of XML-CLASS
(you can't really subclass XML-CLASS without embedding
application-specific knowledge) all over the place.

> > And XML is
> > quickly headed the same way, through the layered standards like
> > XMLNS), for which something like SAX is totally sufficient.
> 
> Is XMLNS the namespace system?  If so, I agree that's a mess. It's like

Yes.

> CL symbols, only without symbol sharing.  What a disaster that is.  But
> that's the main problem I've seen with it--have you seen other problems
> as well?

Not really a parsing problem, but the scoping rules of namespaces
don't mesh all that well with the scoping rules of most programming
languages in my experience.  XMLNS in effect allows you to "create"
and reference different namespaces through the same prefixes in nested
structures, e.g.: 

<abc:foo xmlns:abc="bla">
  <abc:foo xmlns:abc="blub">
    <def:foo xmlns:def="bla"/>
  </abc:foo>
</abc:foo>

which fully resolved would be

<bla#foo><blub#foo><bla#foo></bla#foo></blub#foo></bla#foo>

> I'd like to see it have PRINT-XML methods .. and some way to establish 
> xml-external to lisp-internal mappings .. and perhaps *PRINT-XML-READABLY*.

I'd actually agree with all of those, though I wouldn't call that
embracing XML, and I think that luckily CL is powerful enough to let
users add this kind of support.  Especially the second point is
something that is what I want:  Ways to transform XML (either external
or via XML-internal) into application objects.  This is far preferable
to letting users work on XML directly.  They usually don't work on
e.g. ASN.1 directly, either.

> I've gotten a book on SOAP but never slogged through it.  I don't know why
> people that make this stuff can never just give a capsule summary and why
> I have to read huge big books.  Can you summarize how SOAP works in some
> way that is both brief and conceptually productive in terms of understanding
> what it's really doing?

SOAP (descended from XML-RPC) is just another RPC mechanism, together
with certain data-marshalling rules, using XML as its low-level on
the wire protocol framework.  In fact by adding those data-marshalling
rules, they add some of the parts that XML is missing vis-a-vis s-exprs.

The interesting thing about SOAP is that it might be more practicable
to deploy on the Internet than e.g. CORBA still is.  In that way it
might entice service providers to publish useful programmatic
interfaces to their services, which can then be easily tapped from
languages like CL.  At least that's the idea (there are certain other
uses of SOAP looming on the horizon, if one believes those backing
SOAP).

Of course CORBA promised similar things, and SOAP is still at the
stage that CORBA was at in the beginning of the 90s...

> I guess I think it's the same virtue to deal with XML as for Lisp there is
> s-expression. In essence, I think XML is s-expression for non-lispers.
> Do you disagree?  And as such, it seems worth doing for its own shape.

XML is much, much poorer than s-expressions, so IMHO care must be
taken.  Furthermore, mirroring your own words, I think the worth of
s-expressions doesn't necessarily lie in themselves, it lies more in
the fact that they are the standard representation of CL, and that
they are therefore part of a well-defined semantic framework.  XML is
missing this, by only providing the syntax for the tree-structure,
leaving the semantic framework, as well as the detailed syntax of
attribute values and PCDATA element contents, to applications.

> I recall some years back a conflict on the lisp machine about whether to
> represent variuos in-file data structures as "lispy" or "generic".  e.g.,
> DEFSYSTEM's version database.  It might be nice for file snapshotting for
> non-lisp tools to read the data.  But it's a pain to write parsers/printers.
> At least XML means you can do that once and not have to repeat it for each
> external tool's preferred data style.

That's not really true, IMHO.  You'll still have to write
parsers/printers (though that's a bit of a misnomer) that do the
application specific parsing on top of XML.  Using XML is really like
using CL s-exprs in a non-CL language in that way, only with the
leveller that all languages are non-XML languages now, and that there
is a small library that recognises the '('s and ')'s...

> > Indeed I kind of suspect that if any new and really useful mechanisms
> > are developed on top of XML, one of the first things that will happen
> > once they get more wide-spread acceptance will be to decouple them
> > from the underlying encoding, so that application/environment-specific
> > encodings can be substituted.
> 
> Certainly I'm not a fan of DOM-only internal representations if that's
> what this is saying.

No, what I meant was that any worthwhile protocols that are based on
XML will probably be decoupled at some stage from XML, thereby
allowing other, specialised encodings (like ASN.1+DER, or s-exprs, or
tight on-the-wire formats) to be used.

> > Most of what I've written in this posting seems to be obvious, and
> > indeed it should be.  Nevertheless, most of the uses of *ML I've seen
> > over the years haven't followed this simple path, but rather let *ML
> > seep into the application's core.
> 
> Do you mean they have let DOM become the data structure of choice?  If not,
> I'm missing your point here [obvious or not].

Exactly, either DOM, or something that is more or less similarly
low-level, like e.g. a simple 1:1 mapping of XML to Lisp lists is...

Just another note, since this all started out because I mentioned our
tree transformation approach, which used predicate dispatching:
Another reason why we dropped this approach was not specific to XML,
but rather due to practical problems with predicate dispatching
itself:  The only ordering relationship between methods in predicate
dispatching is that of implication.  That is problematic, because it
doesn't always give a total ordering for all applicable methods, so
you have to invent an additional ordering relationship
(e.g. user-supplied priorities, ...), but that is just some arbitrary
measure, and hence doesn't feel right...
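
A contrived illustration of the ordering problem (a toy dispatcher, not
our actual code):

;; Two rules guarded by predicates, where neither predicate implies
;; the other:
(defparameter *rules*
  (list (cons #'evenp (lambda (x) (list :even x)))
        (cons #'plusp (lambda (x) (list :positive x)))))

(defun applicable-rules (x)
  (remove-if-not (lambda (rule) (funcall (car rule) x)) *rules*))

(length (applicable-rules 4))  ;=> 2, and implication alone gives no
                               ;   reason to prefer either rule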

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Kent M Pitman
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <sfw7l0mav0o.fsf@world.std.com>
"Pierre R. Mai" <····@pmsf.de> writes:

> Kent M Pitman <······@world.std.com> writes:
>
> > Doesn't unix do precisely this.  That is, the system-wide paradigm is 
> > pipe.  Pipe externalizes to text and the next program re-internalizes to
> > memory, then outputs to text, so the next thing can parse back.  It would
> > seem inefficient, but it's relatively robust in spite of that.  About the
> > only thing one loses on are (a) speed [mostly something people seem willing
> > to pay form] and (b) that you have to write a lot of parsers [which xml
> > disposes of by adding structure to text files].
> 
> It doesn't dispose of writing the parsers, or at least it shouldn't.

I disagree here, but perhaps only slightly.

What I meant was it disposes of the "need" to write a parser.  My point
is exactly that writing a parser is a heavyweight activity for a serious
user.  So if I was heavily processing a certain kind of data all the time,
I could afford the cost of doing it right.  But it's the lightweight user
who is utterly screwed by present day situations because they can't borrow
data stored in complex formats for trivial uses.  By putting the stuff in
XML, it is accessible to people with a momentary need, even if in a slightly
crude form that wouldn't stand up under heavy strain.  Compare to other,
proprietary formats, where it wouldn't be accessible at all!

> Just because XML adds a little standardised structure to the external
> format, that doesn't mean that the internal format should be an
> isomorphic mapping of that external structure, except under very
> special circumstances.

I'm not quibbling over "should" but over "can".  When we talk should, we're
talking long-term investment by big guys, who can afford to do things the
right way.  The big guys will never be crowded out of the market, but the
small guy easily will be, by a lot of special-purpose formats.

> Any program that does something remotely useful with the data it gets
> via XML, will have to reinterpret that data using syntactic and
> semantic knowledge that is not present in the XML data itself.

For various values of "remotely useful".  I have lots of silly little scripts
I write in my personal bin directory on Unix that do things by cascading
several calls to grep and sed.  They are not efficient, and I would never
sell them to you.  But they are useful and I would never have invested the
time/money to make them work "right" by any reasonable definition of right.

In effect, IMO, XML very much supports rapid prototyping.  And,
incidentally, Lisp read/print does likewise and for very similar
reasons.  Mostly I don't do any "serious" representation of data in 
s-expression format at all (well, maybe alists or plists, but even then...)
But the ability to have these pre-consed (pardon the meta-pun) kinds of data
available to slap programs together out of when experimenting is utterly
critical to my programming style.

> But it doesn't encode the fact that something is a boolean, byte or
> whatever.  It only really knows about (P)CDATA, i.e. strings.  

Yep.  Limitations.  Same could be said about s-expressions vs other data
types.  Hard to dispatch on, for example.  But c'est la vie.  Still
sometimes handier than doing it right.

> Working directly on a 1:1 mapping of XML is more like treating
> 
> #S(DEMO :KEY1 VALUE1 :KEY2 VALUE2)
> 
> not as a structure of type demo with fields key1 and key2, but as a
> list, whose first element encodes the structure type, whose fields
> are encoded by an p-list with strings (or sub-lists) as values.

You say this but I hear myself saying (even treating this as a list would
be a big step up in Java over the alternative).  If the price of getting
list read/print into Java is putting it in XML...

> I agree that having a 1:1 mapping from XML to some simplistic
> list-based format is a nice thing to have, in that it is one of two
> useful interfaces to an XML parser (the other being event-based).  It
> can also help you to do simplistic transforms easily (similarly to sed
> or perl), and it is a useful stepping stone for writing the parsers
> that transform this into more useful internal representations.

Yes, this is just what I was saying above.  I just think this is
quite a major point.

> Where I get reservations is when one then goes on to create
> specialised mechanisms for working directly with this 1:1 mapping.  If
> this isn't carefully balanced with other language features and tools,
> my experience is that one falls into the same trap that IMHO perl has
> fallen into, by centering too much on regular expressions (which are
> very, very powerful in Perl, whereas other features are quite a bit
> less powerful, and more verbose), thereby effectively driving people
> to work with constructs that are IMHO too low-level.

Well, fortunately, the representational size of a general rep is probably
keeping this from really happening.  I don't know too many people who want
to use the default DOM for their ordinary data, just because of the storage
overhead and complexity of the general tools.

> > I've often said: the key to intelligent behavior isn't picking a right
> > representation, but having a repertoire and knowing when to shift.
> > Each has its virtues.  And having a general purpose tree transformation
> > facility seems an important arrow for the quiver...
> 
> My current impression is that a facility that eases transforming the
> 1:1 mapping into specialised internal representations is the more
> important tool.  Only or mostly providing the GP tree transformation
> arrow is like giving your knights the best armours, when the long-bow
> has already been invented for centuries...

I think I basically agree with this.
 
> > > And that's the reason why I've come to the conclusion that the
> > > predicate-dispatching (with user-extensible patterns as predicates)
> > > sub-language approach is flawed:  It indirectly encourages staying
> > > close to the XML data-structures, because it becomes so easy to work
> > > with them.  In that sense it is pretty similar to regular expressions:
> > > Since they make the wrong approach so easy and cheap to use (at
> > > first), they encourage working this way, until it is too late, and the
> > > damage (to both the project, and the mind of the programmer) has been
> > > done.
> > 
> > This is certainly an interesting claim.  It has a realistic sound to it,
> > though I've not seen it happen a lot in practice.  Do you think this happens
> 
> One example would be Perl and regular expressions.
> 
> > with Lisp and, say, standard-object? (I think the answer is "yes".)  If
> 
> I'm certain that people sometimes rely on standard-class where a
> specialised meta-class might be more appropriate. But...

I do this myself, but for portability.
 
> > so, do you think it's a problem? (I think the answer is "no.")
> 
> I agree that it probably isn't a problem, or at least not a huge
> problem.  But I think that this is for a number of reasons, which
> don't apply to XML automatically:
> 
> - standard-class and its facilities are much higher-level than 1:1
>   XML, therefore the damage done by remaining at this level is
>   probably less serious.
> 
> - CL (with the addition of the MOP) provide similarly powerful tools
>   for working with non-standard-class.
> 
> - Going down the ladder (to a level that is IMHO more comparable with
>   XML), while CL provides good support for working with lists and
>   property lists, the support for working with structures and objects
>   is at least as good.
> 
> - Furthermore CL even provides facilities for treating lists as
>   structures (or vice-versa), thereby allowing easy transitions
>   between high- and low-level representations.
> 
> - CL doesn't provide a specialised transformation facility for lists
>   which doesn't apply to standard-class (or vice-versa).

So you're suggesting this as a meta-model for how to approach an appropriate
set of XML tools, then?  (I'll  think on this.)
 
> To quote yourself, I agree that it is the ecological landscape of a
> language that is important here, the balance between features and their
> interactions, and not the presence or absence of a particular feature
> in itself.

Quoting me seems an unfair way of making sure I don't reply negatively. ;-)
 
> So what I'm really rebelling against is IMHO a strong imbalance in
> most current approaches to working with XML, which concentrate too
> much on the low-level representation, thereby creating tendencies to
> stay at that level long after it is useful.  Building ever more
> specialised low-level transformation facilities IMHO just increases
> this imbalance, rather than decreasing it, at least if one isn't very
> careful about integration with the rest of the language.

I think I get the subtlety of what you're going after; I just don't think
it makes the gp representation bad per se.  (Maybe you don't either.)
I'm glad you took the time to elaborate on your concern, which does seem
a legitimate one.  It's so hard to even have discussions on complex
interactions like the ones you're addressing here, because the foundational
parts of the argument are often the subject of tiny disputes--but the
disputes themselves don't really affect the claims you're making, so in
effect you end up having to make your case on a kind of wobbly foundation.
Even so, I think you've done a good job of it, and
it's given me some very interesting stuff to think about, all my minor
quibbles above notwithstanding.

> > > IMHO the only sensible way of dealing with *ML (and especially with
> > > XML) is to handle them the orthodox way, by treating them just like
> > > any other application-specific external representation.  Parse them,
> > > using any applicable domain knowledge, turn them into an
> > > application-specific internal representation, and forget that they
> > > ever had anything to do with *ML.  The same way for the other
> > > direction.  Especially forget about DTDs, etc.
> > 
> > To an extent, I don't disagree with this.  I guess I just think it's like
> > Lisp read/print.  Yes, when you read something in, you might do other things
> 
> But XML parsing/printing _in itself_ does much less than read/print.

Can you elaborate on that point?

> > internally in your programs.  That is, not all Lisp is lists just because
> > programs are lists.  Yet, it's still useful to have lists as a tool for
> > those programs that want them.  I had the impression you were saying that
> > it was bad to do tree transformation in the first message.  Maybe you are
> 
> Not tree transformations itself.  I just consider it bad to provide a
> tree-transformation facility that works only/best with the 1:1 XML
> representation.

You're saying this just because you think it "too easy" and because it
leads people into what amounts to a "hill-climbing problem" in the wrong
direction?  I still find this a troubling claim, but you build a good case
and I'll have to think more on it.
 
> > saying that or maybe you are only saying it's bad to rely on
> > general purpose structures as your only option.  There's a
> > difference between making sure
> 
> general purpose nested lists with only strings as leaves.

Well, no.  You can also have empty elements.  <true/> or 
<fixnum value="1234" />.
 
> > everything implements the PRINT-XML generic function and saying that 
> > everything is implemented as a subclass of XML-CLASS.
> 
> Indeed.  I have no problem with application-specific PRINT-XML or
> READ-FROM-XML methods, since they will be working with high-level
> objects, turning them into XML, or parsing them from XML, thereby
> leaving the restrictive structure of XML behind.  And I do think that
> tools that assist in writing such methods are valuable additions.
> 
> What I do criticise is working with instances of XML-CLASS
> (you can't really subclass XML-CLASS without embedding
> application-specific knowledge) all over the place.

Right.  Well, and I have the vaguest feeling that if we're not careful
it would also create the possibility of replicating the Integer/int problem
of Java--two related data types in unrelated parts of the type tree.

I prefer to think of it as XML-CONTAINER, not XML-CLASS.  I don't know if
that distinction means much to you.  But I think every object, even XML
objects, deserves some class representation.

> > > And XML is
> > > quickly headed the same way, through the layered standards like
> > > XMLNS), for which something like SAX is totally sufficient.
> > 
> > Is XMLNS the namespace system?  If so, I agree that's a mess. It's like
> 
> Yes.
> 
> > CL symbols, only without symbol sharing.  What a disaster that is.  But
> > that's the main problem I've seen with it--have you seen other problems
> > as well?
> 
> Not really a parsing problem, but the scoping rules of namespaces
> don't mesh all that well with the scoping rules of most programming
> languages in my experience.  XMLNS in effect allows you to "create"
> and reference different namespaces through the same prefixes in nested
> structures, e.g.: 
> 
> <abc:foo xmlns:abc="bla">
>   <abc:foo xmlns:abc="blub">
>     <def:foo xmlns:def="bla"/>
>   </abc:foo>
> </abc:foo>
> 
> which fully resolved would be
> 
> <bla#foo><blub#foo><bla#foo></bla#foo></blub#foo></bla#foo>

Yep.  But doesn't it also let you create two distinct symbols that mean
the same thing, forcing you to do resolution to tell?  That's different
than Lisp as well.  (means you can't just do symbol processing in isolation.)
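Something along these lines, I mean (an untested sketch; EXPAND-NAME and
the prefix->URI alist are just mine, made up for illustration):

  ;; NAME is a string like "abc:foo"; BINDINGS is an alist mapping prefix
  ;; strings to the namespace URIs in scope at this element.
  (defun expand-name (name bindings)
    (let ((colon (position #\: name)))
      (if colon
          (cons (cdr (assoc (subseq name 0 colon) bindings :test #'string=))
                (subseq name (1+ colon)))
          (cons nil name))))

  ;; (expand-name "abc:foo" '(("abc" . "bla")))  => ("bla" . "foo")
  ;; (expand-name "def:foo" '(("def" . "bla")))  => ("bla" . "foo")
  ;; Two distinct prefixed names, one expanded name -- you have to
  ;; resolve before you can compare.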

> > I'd like to see it have PRINT-XML methods .. and some way to establish 
> > xml-external to lisp-internal mappings .. and perhaps *PRINT-XML-READABLY*.
> 
> I'd actually agree with all of those, though I wouldn't call that
> embracing XML,

Ah, so we're just talking the semantics of the words used in the brochures...
;-)

> and I think that luckily CL is powerful enough to let
> users add this kind of support.  Especially the second point is
> something that is what I want:  Ways to transform XML (either external
> or via XML-internal) into application objects.  This is far preferable
> to letting users work on XML directly.  They usually don't work on
> e.g. ASN.1 directly, either.
> 
> > I've gotten a book on SOAP but never slogged through it.  I don't know why
> > people that make this stuff can never just give a capsule summary and why
> > I have to read huge big books.  Can you summarize how SOAP works in some
> > way that is both brief and conceptually productive in terms of understanding
> > what it's really doing?
> 
> SOAP (descended from XML-RPC) is just another RPC mechanism, together
> with certain data-marshalling rules, using XML as its low-level on
> the wire protocol framework.  In fact by adding those data-marshalling
> rules, they add some of the parts that XML is missing vis-a-vis s-exprs.
> 
> The interesting thing about SOAP is that it might be more practicable
> to deploy on the Internet than e.g. CORBA still is.  In that way it
> might entice service providers to publish useful programmatic
> interfaces to their services, which can then be easily tapped from
> languages like CL.  At least that's the idea (there are certain other
> uses of SOAP looming on the horizon, if one believes those backing
> SOAP).
> 
> Of course CORBA promised similar things, and SOAP still is at the
> stage that CORBA was at beginning of the 90s...
> 
> > I guess I think it's the same virtue to deal with XML as for Lisp there is
> > s-expression. In essence, I think XML is s-expression for non-lispers.
> > Do you disagree?  And as such, it seems worth doing for its own shape.
> 
> XML is much, much poorer than s-expressions, so IMHO care must be
> taken.  Furthermore, mirroring your own words,
[darn]
> I think the worth of
> s-expressions doesn't necessarily lie in themselves, it lies more in
> the fact that they are the standard representation of CL, and that
> they are therefore part of a well-defined semantic framework.  XML is
> missing this, by only providing the syntax for the tree-structure,
> leaving the semantic framework, as well as the detailed syntax of
> attribute values and PCDATA element contents, to applications.

Good point (whoever made it).  I've often said this of XML/SGML vs HTML
(where HTML does have the Lispy character), but I've never linked those
two observations (the lisp one and the *ML one). Hmm.
 
> > I recall some years back a conflict on the lisp machine about whether to
> > represent variuos in-file data structures as "lispy" or "generic".  e.g.,
> > DEFSYSTEM's version database.  It might be nice for file snapshotting for
> > non-lisp tools to read the data.  But it's a pain to write parsers/printers.
> > At least XML means you can do that once and not have to repeat it for each
> > external tool's preferred data style.
> 
> That's not really true, IMHO.  You'll still have to write
> parsers/printers (though that's a bit of a misnomer) that do the
> application specific parsing on top of XML.  Using XML is really like
> using CL s-exprs in a non-CL language in that way, only with the
> leveller that all languages are non-XML languages now, and that there
> is a small library that recognises the '('s and ')'s...

Maybe what you're saying is that the choice of primitives to write your
parser from is in one case primitives that do I/O and in the other
primitives that do XCAR and XCDR.
 
> > > Indeed I kind of suspect that if any new and really useful mechanisms
> > > are developed on top of XML, one of the first things that will happen
> > > once they get more wide-spread acceptance will be to decouple them
> > > from the underlying encoding, so that application/environment-specific
> > > encodings can be substituted.
> > 
> > Certainly I'm not a fan of DOM-only internal representations if that's
> > what this is saying.
> 
> No, what I meant was that any worthwhile protocols that are based on
> XML will probably be decoupled at some stage from XML, thereby
> allowing other, specialised encodings (like ASN.1+DER, or s-exprs, or
> tight on-the-wire formats) to be used.

I see the logic in this but I fear that's overly ambitious/optimistic
so I've set my sights shorter on what public opinion can be pushed toward.
Dunno if that's good or bad of me.
 
> > > Most of what I've written in this posting seems to be obvious, and
> > > indeed it should be.  Nevertheless, most of the uses of *ML I've seen
> > > over the years haven't followed this simple path, but rather let *ML
> > > seep into the application's core.
> > 
> > Do you mean they have let DOM become the data structure of choice?  If not,
> > I'm missing your point here [obvious or not].
> 
> Exactly, either DOM, or something that is more or less similarly
> low-level, like e.g. a simple 1:1 mapping of XML to Lisp lists is...
> 
> Just another note, since this all started out because I mentioned our
> tree transformation approach, which used predicate dispatching:
> Another reason why we dropped this approach was not specific to XML,
> but rather due to practical problems with predicate dispatching
> itself:  The only ordering relationship between methods in predicate
> dispatching is that of implication.  That is problematic, because it
> doesn't always give a total ordering for all applicable methods, so
> you have to invent an additional ordering relationship
> (e.g. user-supplied priorities, ...), but that is just some arbitrary
> measure, and hence doesn't feel right...
> 
> Regs, Pierre.

Thanks for the response!
From: Boris Schaefer
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <87vgo36eyc.fsf@qiwi.uncommon-sense.net>
Kent M Pitman <······@world.std.com> writes:

| Pierre R. Mai <····@pmsf.de> writes:
| 
| > But XML parsing/printing _in itself_ does much less than read/print.
| 
| Can you elaborate on that point?
| 
| [...]
| 
| Well, no.  You can also have empty elements.  <true/> or 
| <fixnum value="1234" />.

Or <fixnum>1234</fixnum>, or <fixnum value="I am not a fixnum">, or
<fixnum>I am not a fixnum</fixnum>.  Point being, you have to parse
the contents, to see if they match your expectations.  This is
something you will not have to do in Lisp.

Furthermore, I think it was something that Tim Bradshaw said some time
ago that made me realize, that in XML you will sometimes have to turn
attributes into contents, because attributes don't have structure.

Basically, I believe the distinction between attributes and contents
is flawed.  It's just as if you say, all but the last argument to a
function call have to be strings.

Boris

-- 
·····@uncommon-sense.net - <http://www.uncommon-sense.net/>

The hardest part of climbing the ladder of success is getting through
the crowd at the bottom.
From: Kent M Pitman
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <sfwae5f4umz.fsf@world.std.com>
Boris Schaefer <·····@uncommon-sense.net> writes:

> 
> Kent M Pitman <······@world.std.com> writes:
> 
> | Pierre R. Mai <····@pmsf.de> writes:
> | 
> | > But XML parsing/printing _in itself_ does much less than read/print.
> | 
> | Can you elaborate on that point?
> | 
> | [...]
> | 
> | Well, no.  You can also have empty elements.  <true/> or 
> | <fixnum value="1234" />.
> 
> Or <fixnum>1234</fixnum>, or <fixnum value="I am not a fixnum">, or
> <fixnum>I am not a fixnum</fixnum>.  Point being, you have to parse
> the contents, to see if they match your expectations.  This is
> something you will not have to do in Lisp.

Yes and no.  In Lisp you can still write expressions like (QUOTE A B), too.
(The question of detecting an ill-formed expression is logically different
than the question of coercion, and can still require some "parsing" even
in Lisp.)  I grant you XML has more of this, but Lisp is not clean on this.

Further, you're assuming that when I write 123.34e1.3e4.5 that I mean a 
symbol, or that when I write 1234 I meant a number and not a symbol, for that
matter.  Sure, Lisp's parser will give me one or the other, but that doesn't
mean it detected all errors of intent (including typos).  And XML, through
more redundancy of expression, gives you some recoverability that Lisp doesn't.
For example, I think <fixnum value="four" /> is more likely to mean 4 than
FOUR and might be correctable while still in parse phase, while FOUR in Lisp
isn't even open for debate and has to drift through to the application phase
before being noticed as improper data.  By that point, it might have taken
several wrong turns in data processing and the module doing the error checking
might not be the relevant one to be able to fix the error and proceed.
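Something like this is what I have in mind -- just a sketch, and
PARSE-FIXNUM-VALUE plus the little word table are invented for the
example:

  (defparameter *number-words*
    '(("zero" . 0) ("one" . 1) ("two" . 2) ("three" . 3) ("four" . 4)))

  (defun parse-fixnum-value (string)
    ;; Called when the parser sees <fixnum value="..."/>; the element
    ;; name tells us a number was intended, so "four" can be repaired
    ;; right here instead of deep inside the application.
    (or (parse-integer string :junk-allowed t)
        (cdr (assoc string *number-words* :test #'string-equal))
        (error "~S does not name a fixnum." string)))

  ;; (parse-fixnum-value "1234") => 1234
  ;; (parse-fixnum-value "four") => 4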

> Furthermore, I think it was something that Tim Bradshaw said some time
> ago that made me realize, that in XML you will sometimes have to turn
> attributes into contents, because attributes don't have structure.

Yes, this is so, but...
 
> Basically, I believe the distinction between attributes and contents
> is flawed.  It's just as if you say, all but the last argument to a
> function call have to be strings.

I'm not totally in disagreement with you, although (a) most languages
(including Lisp) do make a strong distinction between container
classes (might contain pointers) and raw data classes (which don't,
and can be more compactly represented), and (b) it isn't "last" argument
since there is no ordering, and moreover it isn't just "one" argument
since the content can contain several items, so it's not as limiting as
all that.  <foo a="3" c="4"><b><stuff /></b><d><morestuff /></d></foo>

While at my previous employer, Crystaliz, I wrote a Java class to help
make it easier to use XML and which would automatically create ways to 
express classes in XML by rewriting the primitives types (int, float, etc.)
and certain non-primitive types that didn't require pointers (e.g., dates)
into attributes "automagically", and which used the contents for representing
those slots that had container class values.

I do agree with you though that the stuff SGML had (and which XML stole
some of, I think) where certain attribute values can be things like string
lists of values, etc., trying to force structure in where it didn't fit,
is annoying.

I also agree that the language is underconstrained in telling you whether
to use an attribute or a body element when either will do, though I have
come to think that good style is to use an attribute except where either
mixed or recursive content is possible.  This resolves at least that
ambiguity of possibility in a way that programmers not willing to become
career philosophers can deal with productively.

Look, I don't want to sell XML as "all the right design choices".  It has
heaps of things I don't like.  (The whole DTD thing is so much of a glaring
problem to me, being a huge part of the spec and something a large part of
the community just ignores..)  But it has one thing I really like: huge
acceptance in service of a problem other languages have long had: the absence
of read/print.  I'm willing to trade a lot of details to get even a start on
a well-accepted solution (with admittedly more emphasis on "well-accepted"
than "solution").  Evolution toward something totally winning comes in time,
not overnight.  Look back at some of the earlier Lisp dialects and you'll
see stuff far worse than this (and  you'll start to  understand why the rest
of the world imagines we could never have crawled free of the limitations
we built for ourselves--just as some seem here to be assuming the XML
world can't).
From: Michael L. Rilee
Subject: Early lisp dialects, idiosyncracies thereof (was Re: XML transformation  (was: Re: Language intolerance ...))
Date: 
Message-ID: <3ADC6BFA.E767EE32@gsfc.nasa.gov>
* Kent M Pitman wrote:
> Look back at some of the earlier Lisp dialects and you'll
> see stuff far worse than this (and  you'll start to  understand why the rest
> of the world imagines we could never have crawled free of the limitations
> we built for ourselves--just as some seem here to be assuming the XML
> world can't).

How about describing your choices for the top three such limitations?
Not to start anything polemical, but I am curious as to why these
limitations were such that the "rest of the world" saw them as
insurmountable.  And after that, how were these limitations surmounted,
if indeed they have been.

-- 
Mike Rilee Emergent IT/NASA/GSFC Mlstp 931, B28/S207 Greenbelt, MD 20771
·················@gsfc.nasa.gov Ph. (301)286-4743/-4101 Fx.
(301)286-1634
Computing in Sun-Earth Connections: http://lep694.gsfc.nasa.gov/rilee
From: Kent M Pitman
Subject: Re: Early lisp dialects, idiosyncracies thereof (was Re: XML transformation   (was: Re: Language intolerance ...))
Date: 
Message-ID: <sfwn19fpis6.fsf@world.std.com>
"Michael L. Rilee" <·················@gsfc.nasa.gov> writes:

> * Kent M Pitman wrote:
> > Look back at some of the earlier Lisp dialects and you'll see
> > stuff far worse than this (and you'll start to understand why the
> > rest of the world imagines we could never have crawled free of the
> > limitations we built for ourselves--just as some seem here to be
> > assuming the XML world can't).
> 
> How about describing your choices for the top three such
> limitations?  Not to start anything polemical, but I am curious as
> to why these limitations were such that the "rest of the world" saw
> them as insurmountable.  And after that, how were these limitations
> surmounted, if indeed they have been.

Certainly "interpreted-only" was a property of some early implementations
of Lisp (even though compilation has been an adjunct of SOME Lisp dialects
going back to the beginning, it wasn't a property of all) that seemed
insurmountable to a great many people.  It gave Lisp a reputation for being
slow because in those days people didn't understand that the language 
definition could be separated from the implementation in a way that allowed
compilation to function on what had an interpreted definition.  They thought
interpretation was part of the definition.

Early lisps also generally had no string data type and often no array data
type.  This is why Lisp is often taught only with symbols and lists, partly
because those types are novel to other languages, and partly because some
instructors are oblivious to changes in the language leading to the addition
of such types.  In the late 1970's, when I started with Lisp, we were still
using MACLISP, which had arrays but no strings.  One used '|abc| as a string
and one used (exploden '|abc|) => (97 98 99) to find out what was "in" the
"string".  It's easy to understand why anyone who saw Lisp back then didn't
take it seriously as a general purpose language.

And Lisp was dynamically scoped, which was a semantic analysis disaster.
That the community could switch from dynamic to lexical scoping in a way
that was accepted and even tolerated legacy programs was a pretty amazing 
feat.  But one might rightly have said Lisp was on a dead-end path with just
dynamic scoping.

I could probably go on and on with a bit more thought.  But my point is
that it sometimes takes a cycle or two of language design to get out of
these ruts.  But we do grow.  And what survives is the community and the
ideas.  I think XML will as well.  Indeed, XML is to many the survival and
even blossoming of the ideas of SGML, which is really quite radically 
different.  One might say they've undergone at least one evolutionary
step in getting as far as XML for the sake of survival.  The next best
thing they could do, IMO, is to drop all the DTD stuff and become about 
a six page spec (the part everyone implements anyway), dropping the part
that only the rich companies implement and that is arguably too weird for
people to want.  (The DTD stuff, if it ever survives/returns should be
written in standard XML syntax, not in a special syntax requiring a
special parser, IMO... But at least XML is down to needing only two parsers
instead of SGML's three...)
From: Chris Riesbeck
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <riesbeck-DDAEE1.13261817042001@news.acns.nwu.edu>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

>I also agree that the language is underconstrained in telling you whether
>to use an attribute or a body element when either will do, though I have
>come to think that good style is to use an attribute except where either
>mixed or recursive content is possible.  

mixed, recursive, or repeated, e.g., 

 <book author="john" author="mary" />

is illegal, you have to write

 <book>
   <author>john</author>
   <author>mary</author>
 </book>
From: Kent M Pitman
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <sfw3db7nx48.fsf@world.std.com>
Chris Riesbeck <········@ils.nwu.edu> writes:

> In article <···············@world.std.com>, Kent M Pitman 
> <······@world.std.com> wrote:
> 
> >I also agree that the language is underconstrained in telling you whether
> >to use an attribute or a body element when either will do, though I have
> >come to think that good style is to use an attribute except where either
> >mixed or recursive content is possible.  
> 
> mixed, recursive, or repeated, e.g., 
> 
>  <book author="john" author="mary" />
> 
> is illegal, you have to write
> 
>  <book>
>    <author>john</author>
>    <author>mary</author>
>  </book>

Indeed, but this doesn't come up for slot names since we don't allow
more than one foo.bar either.

This is one of XML's other questionable features, though--the conflation
of markup=structure with markup=type.

 <book>
   <author>john</author>
   <author>john</author>
 </book>

is permissible only if BOOK is a container and AUTHOR is a type of contained
element.  That's why a <body> can contain more than one <p> in HTML.  But
AUTHOR can occur only once if BOOK is a structure and AUTHOR is a slot-name
in that structure, even in XML (assuming you've told the DTD).
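In Lisp terms the two readings of that same markup are roughly this (a
throwaway sketch of mine, nothing more):

  (defstruct book-as-structure
    author)                        ; AUTHOR is a slot: at most one

  (defstruct book-as-container
    (authors '()))                 ; AUTHOR is a contained element type:
                                   ; any number of them

  ;; (make-book-as-container :authors '("john" "mary")) is fine, but
  ;; there's no way to say the same with BOOK-AS-STRUCTURE without
  ;; changing its definition -- that's the structure/type conflation.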
From: Chris Riesbeck
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <riesbeck-52D993.15043718042001@news.acns.nwu.edu>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

>Chris Riesbeck <········@ils.nwu.edu> writes:
>
>This is one of XML's other questionable features, though--the conflation
>of markup=structure with markup=type.
>
> <book>
>   <author>john</author>
>   <author>john</author>
> </book>
>
>is permissible only if BOOK is a container and AUTHOR is a type of contained
>element.  That's why a <body> can contain more than one <p> in HTML.  But
>AUTHOR can occur only once if BOOK is a structure and AUTHOR is a slot-name
>in that structure, even in XML (assuming you've told the DTD).

??? I may be missing something in my readings, but I don't
believe XML recognizes any distinction between structure and 
slot-name, and DTDs don't say so either. DTDs just say
what tags can go inside what tags. If anything might tell
you, it's RDF, I think, but even then I'm not sure.

As an example of the inconsistencies you have to deal with
in data binding, suppose you have a LOCATION class and a 
publisher has a location. Then perfectly normal XML would
be

  <publisher>
    <location>...</location>
  </publisher>

where this is supposed to set the LOCATION slot of
a PUBLISHER with a LOCATION. There are several Java
data-binders that do just that.

But if a publisher has a main office and local office, someone
would've written

  <publisher>
    <homeOffice>
      <location>...</location>
    </homeOffice>
    <mainOffice>
      <location>...</location>
    </mainOffice>
  </publisher>

Now LOCATION is just a structure maker.

If you, the programmer, INSIST on consistency, then XML
for the first case would have to be

  <publisher>
    <location>
      <location> ... </location>
    </location>
  </publisher>

which only us programmers could love.

This is what makes data-binding XML to class instances
so, um, interesting. 

By the way, the multiple author case is handled by many 
Javabean-based data-binders, as long as AUTHOR is
known to be what's called an "indexed property" of BOOK,
i.e., a BOOK has N authors, stored, conceptually at least,
like an array.
From: Chris Riesbeck
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <riesbeck-BCB4F9.13394117042001@news.acns.nwu.edu>
In article <···············@world.std.com>, Kent M Pitman 
<······@world.std.com> wrote:

>While at my previous employer, Crystaliz, I wrote a Java class to help
>make it easier to use XML and which would automatically create ways to 
>express classes in XML by rewriting the primitives types (int, float, etc.)
>and certain non-primitive types that didn't require pointers (e.g., dates)
>into attributes "automagically", and which used the contents for representing
>those slots that had container class values.

There's an unsurprising amount of work going on making
Java <-> XML transformers. The official Sun name is "data binding"
and some examples I know of for Java are JOX, JSX, JQuick, and Zeus.

As always, there's a big tension between simplest for coding
(JSX can dump and read almost any serializable Java instance),
most flexible (JQuick may win here) in how you define the
mapping, and simplest XML output.
From: Boris Schaefer
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <87wv8jnsx6.fsf@qiwi.uncommon-sense.net>
Kent M Pitman <······@world.std.com> writes:

| Yes and no.  In Lisp you can still write expressions like (QUOTE A
| B), too.  (The question of detecting an ill-formed expression is
| logically different than the question of coercion, and can still
| require some "parsing" even in Lisp.)  I grant you XML has more of
| this, but Lisp is not clean on this.

Well, I just tried (QUOTE 1 2) in ACL6.0 Trial and it yielded 1, which
surprised me, so I checked the HyperSpec.  In the documentation of
QUOTE I find no indication that it should accept more than one
argument.  Is there any reason that (QUOTE 1 2) is well-formed (and is
there any info on this in the HyperSpec, because I didn't find it, or
did I misunderstand the description of QUOTE and it doesn't have to
signal PROGRAM-ERROR when SAFETY is 3)?

| Further, you're assuming that when I write 123.34e1.3e4.5 that I
| mean a symbol, or that when I write 1234 I meant a number and not a
| symbol, for that matter.  Sure, Lisp's parser will give me one or
| the other, but that doesn't mean it detected all errors of intent
| (including typos).

I'm not really talking about what you meant to write.  I am saying
that if you _want_ to write a number, you can.  I'm not saying that
the Lisp syntax appeals to you.  In XML, you cannot know whether what
you wrote is a number or not unless you know the application that is
reading what you wrote.

| And XML, through more redundancy of expression, gives you some
| recoverability that Lisp doesn't.  For example, I think <fixnum
| value="four" /> is more likely to mean 4 than FOUR and might be
| correctable while still in parse phase, while FOUR in Lisp isn't
| even open for debate and has to drift through to the application
| phase before being noticed as improper data.  [...]

To paraphrase Tom Duff: I'm sure the above forms some sort of argument
in this debate, but I'm not sure whether it's for or against.

| > Basically, I believe the distinction between attributes and
| > contents is flawed.  It's just as if you say, all but the last
| > argument to a function call have to be strings.
| 
| I'm not totally in disagreement with you, although (a) most
| languages (including Lisp) do make a strong distinction between
| container classes (might contain pointers) and raw data classes
| (which don't, and can be more compactly represented),

I'm not sure what the difference between container and raw data
classes has to do with the fact that XML basically only allows keyword
args of type STRING.  I don't think I understand what you mean here.

| and (b) it isn't "last" argument since there is no ordering, and
| moreover it isn't just "one" argument since the content can contain
| several items, so it's not as limiting as all that.  <foo a="3"
| c="4"><b><stuff /></b><d><morestuff /></d></foo>

Well, there's no ordering, but I sometimes like to think of XML as
structured like this:

  <foo a="3" c="4"><b><stuff/></b><d><morestuff/></d></foo>

  (foo :a "3" :c "4"
       :contents (list (b :contents (list (stuff)))
                       (d :contents (list (morestuff)))))

This kind of turns the contents of a tag into a single keyword arg
named CONTENTS.  Since in XML the contents are physically after the
attributes I think it's appropriate to say that the contents are a
single and last argument (even if that is far from technically
accurate).
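
For what it's worth, a printer back out of that shape is only a few
lines (an untested sketch, and I've switched to treating the form as
quoted data rather than function calls):

  (defun emit-xml (form &optional (stream *standard-output*))
    (if (stringp form)
        (write-string form stream)        ; leaves print as character data
        (destructuring-bind (tag &rest args) form
          (let ((contents (getf args :contents))
                (attrs (loop for (k v) on args by #'cddr
                             unless (eq k :contents) append (list k v))))
            (format stream "<~(~A~)~{ ~(~A~)=\"~A\"~}" tag attrs)
            (cond (contents
                   (write-string ">" stream)
                   (mapc (lambda (c) (emit-xml c stream)) contents)
                   (format stream "</~(~A~)>" tag))
                  (t (write-string "/>" stream)))))))

  ;; (emit-xml '(foo :a "3" :c "4"
  ;;                 :contents ((b :contents ((stuff)))
  ;;                            (d :contents ((morestuff))))))
  ;; prints <foo a="3" c="4"><b><stuff/></b><d><morestuff/></d></foo>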

Further, I think the distinction between attributes and content is
less than helpful, because at least to me it suggests different levels
of importance, content being more important than attributes.  This is
probably even intended, but to me it seems just like saying that some
arguments in a function call are more important than others.  This
might even be true in some cases, but it is hardly true in general.

| I also agree that the language is underconstrained in telling you
| whether to use an attribute or a body element when either will do,
| though I have come to think that good style is to use an attribute
| except where either mixed or recursive content is possible.  This
| resolves at least that ambiguity of possibility in a way that
| programmers not willing to become career philosophers can deal with
| productively.

I don't know whether this is so good.  The problem with attributes is
that in my experience they don't scale.  It happened to me a few times
that I started out with an attribute, and later had to change it to
contents, because the representation became more complex.  So after a
few of these hassles, I just stopped using attributes.  I'm not too
thrilled about that either, but it's been some time since I've last
used XML and so it's currently not bothering me ;-)

| Evolution toward something totally winning comes in time, not
| overnight.  Look back at some of the earlier Lisp dialects and
| you'll see stuff far worse than this (and you'll start to understand
| why the rest of the world imagines we could never have crawled free
| of the limitations we built for ourselves--just as some seem here to
| be assuming the XML world can't).

Not to say that XML won't be good some day, but I'm not sure that
embracing XML now has any obvious benefit except the name.  (Not
counting the name as an "obvious benefit" probably proves that I could
never make a career in management.)

Boris

-- 
·····@uncommon-sense.net - <http://www.uncommon-sense.net/>

You never know how many friends you have until you rent a house on the beach.
From: Boris Schaefer
Subject: Re: XML transformation (was: Re: Language intolerance ...)
Date: 
Message-ID: <87snj7nsjf.fsf@qiwi.uncommon-sense.net>
Boris Schaefer <·····@uncommon-sense.net> writes:

| Is there any reason that (QUOTE 1 2) is well-formed [...]
                                          ^^^^^^^^^^^

Of course it is well-formed.  What I meant is:  why isn't a conforming
implementation required to signal a PROGRAM-ERROR when this is
evaluated with SAFETY 3?

Boris

-- 
·····@uncommon-sense.net - <http://www.uncommon-sense.net/>

When the ax entered the forest, the trees said, "The handle is one of us!"
		-- Turkish proverb
From: Paolo Amoroso
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <KWqnOuQi5Aa05ll22c564sbRpA74@4ax.com>
On Wed, 07 Mar 2001 22:37:17 GMT, ········@esatclear.ie (Russell Wallace)
wrote:

> to use CLOS which already exists and works well. But what if you want
> to experiment with a new object system design you've thought of?
> Writing or modifying a Scheme implementation for the purpose is a
> reasonable thing to do.

Another option is to use the CLOS MOP.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Tim Bradshaw
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <ey3d7bsm1t0.fsf@cley.com>
* Russell Wallace wrote:

> Well, we start off by teaching Newton's laws of physics and the ball
> and stick model of chemistry rather than the more complex general
> laws, because they're simpler, therefore easier to deal with. One
> can't restrict oneself to the simple models forever, but they do have
> their uses.

I'm kind of not sure this is true.  Newton's laws (and Newtonian
gravity) aren't really *simpler* than special (general) relativity.
In fact they've really got a whole lot more concepts in there -- for
instance conservation of momentum and energy are separate things
which are kind of obviously related but not the same, whereas in
relativity they're just the same thing.

So I think that there are reasons to teach them the way they are
taught but that simplicity isn't really one of them.

On the other hand I think that making analogies from science to
programming language design is a bit dubious (though I have done it
many times).

--tim
From: Russell Wallace
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3aa6b621.83885555@news.iol.ie>
On 07 Mar 2001 18:57:31 +0000, Erik Naggum <····@naggum.net> wrote:

>  I consider the main advantage the fact that I don't have to stop using
>  certain nouns because I have used the same spelling for some verbs or
>  vice versa.

*nod* Fair enough. I generally find I don't particularly want to use
the same names for functions and data, though I do want to use the
same names for _types_ and data, so that if I have an object of type
FOO, I can call it FOO instead of MY-FOO or whatever.

>  I would consider myself to have a
>  problem if I had to _invent_ new names or use some arbitrary conventions
>  to name various objects the same modulo the arbitrary noise.  I dislike
>  that in Ada, for instance, and I positively hate C++ used with Hungarian/
>  Microsoftian gobbledygook, which simply fakes a _lot_ of namespaces.

Yeah, I can't stand Hungarian notation either. What I've ended up
doing for my C++ code is adopting the Java convention where types are
named LikeThis and functions and data are named likeThis, which is a
fairly painless way to get a separate namespace for types.

-- 
"To summarize the summary of the summary: people are a problem."
···············@esatclear.ie
http://www.esatclear.ie/~rwallace
From: Pierre R. Mai
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <87u254xowo.fsf@orion.bln.pmsf.de>
········@esatclear.ie (Russell Wallace) writes:

> On 07 Mar 2001 18:57:31 +0000, Erik Naggum <····@naggum.net> wrote:
> 
> >  I consider the main advantage the fact that I don't have to stop using
> >  certain nouns because I have used the same spelling for some verbs or
> >  vice versa.
> 
> *nod* Fair enough. I generally find I don't particularly want to use
> the same names for functions and data, though I do want to use the
> same names for _types_ and data, so that if I have an object of type
> FOO, I can call it FOO instead of MY-FOO or whatever.

What about the infamous 'list' example:

(defun mogrify-my-list (list)
  (nconc (sort list #'<) (list 1 2 3)))

In Scheme you can't do this, so people will use 'lst' or 'a-list' or
whatever instead:

(define (mogrify-my-list lst)
  (nconc (sort lst #'<) (list 1 2 3)))

Dylan probably gets around this by naming their list function
make-list or something similar.  That works for a fairly large number
of things, but they still suffer from the class/variable clash, and
hence have adopted a user-enforced namespace for classes using their
<class-name> naming convention.

I don't think that letting the user handle problems that can easily be
handled at the language level is a valid design approach.  I'd prefer
a language with n language-defined namespaces over a language with 1
language-defined namespace and n user-defined namespaces any day.

In fact I think that CL has at least one namespace too few, namely one
that distinguishes between special/dynamic variables/bindings and
lexical variables/bindings, instead of relying on the *special-var*
convention.  Of course having a new namespace for special-variables is
a tad more complex/syntactically ugly than having one for
functions/variables (or those for blocks, types/classes, go-tags,
catch-tags, etc.), since you'd have to introduce extra syntax to
disambiguate, whereas you get the disambiguation for
function/variables basically for free with standard s-exps.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Dorai Sitaram
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <9889pi$m00$1@news.gte.com>
In article <··············@orion.bln.pmsf.de>,
Pierre R. Mai <····@acm.org> wrote:
>
>In fact I think that CL has at least one namespace too few, namely one
>that distinguishes between special/dynamic variables/bindings and
>lexical variables/bindings, instead of relying on the *special-var*
>convention.  Of course having a new namespace for special-variables is
>a tad more complex/syntactically ugly than having one for
>functions/variables (or those for blocks, types/classes, go-tags,
>catch-tags, etc.), since you'd have to introduce extra syntax to
>disambiguate, whereas you get the disambiguation for
>function/variables basically for free with standard s-exps.

If CL had a #* (along the lines of #') notation
for referring to special variables instead of trusting
users to use *...*, you could get the desired separate
namespace for specials at no syntactic cost.  Instead
of saying *special*, you'd say #*special.  No increase
in character count.

--d

ps: Yes I know #* is used for something else, I forget
what (bit-vectors?).  Insert <any-as-yet-unused-char>
for *. 
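
Here's a sketch of the reader half, using #$ for the sake of argument
(totally hypothetical; DYNAMIC stands for whatever form the language
would give us to mean "the dynamic binding of"):

  (set-dispatch-macro-character
   #\# #\$
   (lambda (stream subchar arg)
     (declare (ignore subchar arg))
     (list 'dynamic (read stream t nil t))))

  ;; (read-from-string "#$special") => (DYNAMIC SPECIAL)
  ;; i.e. #$special is to specials what #'foo is to functions: pure
  ;; read syntax, and no more characters than *special*.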
From: Dorai Sitaram
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <988iui$n1d$1@news.gte.com>
The context is Pierre Mai's wishing for a different
namespace for specials and then tempering his wish with
the quite reasonable worry that it would get
syntactically ugly.  I'm suggesting that it doesn't
have to get syntactically ugly.  Pierre: it's quite
possible that you might conclude after all to reject
the notion of a different namespace for specials, but
if so it would be for some other reasons and they would
be interesting to know.  


In article <················@naggum.net>, Erik Naggum  <····@naggum.net> wrote:
>
>  #'x returns (function x) when read, so the syntax is an irrelevant and
>  trivial issue,  If we wanted to, we could make * a reader macro that
>  returned something like (special *special*) or whatever.  No change to
>  source code at all.  And users would need to be trusted exactly the same
>  as before.  So we're left to ask: What makes the namespace?  If some
>  special syntax makes it, why not the syntactic _convention_?
From: Pierre R. Mai
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <87u2543t5x.fsf@orion.bln.pmsf.de>
····@goldshoe.gte.com (Dorai Sitaram) writes:

> The context is Pierre Mai's wishing for a different
> namespace for specials and then tempering his wish with
> the quite reasonable worry that it would get
> syntactically ugly.  I'm suggesting that it doesn't

The "worry" wasn't intended in an absolute sense, but only relative to
the "non-ugliness" of the functional namespace.  I was aware of the
work that went on in ISLisp, and I think that if one were to go down that
road, this would indeed be the way to do it.  Note that there still is
some difference between the functional and the dynamic namespaces:
With the functional namespace, the accessor FUNCTION is only needed in
the "special" case (accessing a function in a non-function position),
whereas the accessor dynamic is needed for every access.

OTOH dynamic/special variables have such pervasive effects that each
access probably should carry a special distinction (as witnessed by
the *convention*), so maybe that's just as well.

All in all _if I were designing a new CL_, I'd probably be in favour
of replacing the user-defined dynamic namespace with a language
defined dynamic namespace, if only to restrain newbies from shooting
themselves in the foot by defvarring x and then wondering why there
whole program behaves strangely.  Given that it's much too late in the
game for CL, all this talk is pure daydreaming, especially since I
value language stability much more than having the "right" or "cool"
feature of the season.

Anyway, I'd always prefer CL over a language that has a dynamic
namespace, but conflates function and variable namespaces.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Kent M Pitman
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <sfwn1awkttt.fsf@world.std.com>
Erik Naggum <····@naggum.net> writes:

> 
> * "Pierre R. Mai" <····@acm.org>
> > In fact I think that CL has at least one namespace too few, namely one
> > that distinguishes between special/dynamic variables/bindings and
> > lexical variables/bindings, instead of relying on the *special-var*
> > convention.
> 
>   FWIW, ISLISP introduced just this.  

Yep.

>    dynamic accesses the dynamic binding,
>   defdynamic establishes a top-level binding, and dynamic-let establishes a
>   local binding.

And (dynamic varname) accesses it.  You can't refer to them as a simple symbol.
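In use it looks roughly like this (from memory of the draft, so don't
hold me to the details):

  (defdynamic trace-level 0)           ; top-level dynamic binding

  (defun current-level ()
    (dynamic trace-level))             ; the only way to read it

  (dynamic-let ((trace-level 1))       ; local dynamic rebinding
    (current-level))                   ; => 1; TRACE-LEVEL by itself
                                       ;    names nothing lexical here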

>   In contrast, defglobal defines a top-level binding that
>   can be shadowed by a normal let.  ISLISP has four namespaces: variable,
>   dynamic, function, and class.
> 
>   At least in the draft I got for free, I never bought the final standard.

There was a public domain version of the final version which someone
recently observed to me has become unavailable at the Harlequin web
site.  I'll see if I can track down the public domain version and post
it at my web site.
From: Bruce Hoult
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <bruce-3E0CA2.00223201032001@news.nzl.ihugultra.co.nz>
In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
wrote:

> > I lurk in a fair number of comp.lang.* newsgroups, and this
> > one seems to me to have the most intolerance/smugness.
> 
>   Try comp.lang.dylan.  Talk about Common Lisp and prefix syntax.
>   Ask them why they dropped it.

I understand that it was intended to allow both the infix and prefix 
syntaxes to be used, at the user's discretion.  But then I guess even 
the extremely experienced Lisp people such as Scott Fahlman and David 
Moon found that actually an infix syntax wasn't so bad after all.  When 
difficulties arose in defining a macro facility that could be mapped 
mechanically between infix and prefix syntaxes they decided to drop the 
prefix one.

If you know a different history I'd be interested to hear it.

-- Bruce
From: Jason Trenouth
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <ivrp9t4fike8n8tq58faqdqtpd04m13ch9@4ax.com>
On Thu, 01 Mar 2001 00:22:32 +1300, Bruce Hoult <·····@hoult.org> wrote:

> In article <················@naggum.net>, Erik Naggum <····@naggum.net> 
> wrote:
> 
> > > I lurk in a fair number of comp.lang.* newsgroups, and this
> > > one seems to me to have the most intolerance/smugness.
> > 
> >   Try comp.lang.dylan.  Talk about Common Lisp and prefix syntax.
> >   Ask them why they dropped it.
> 
> I understand that it was intended to allow both the infix and prefix 
> syntaxes to be used, at the user's discretion.  But then I guess even 
> the extremely experienced Lisp people such as Scott Fahlman and David 
> Moon found that actually an infix syntax wasn't so bad after all.  When 
> difficulties arose in defining a macro facility that could be mapped 
> mechanically between infix and prefix syntaxes they decided to drop the 
> prefix one.

Ironically for this discussion, Dylan is more like (+ Scheme CLOS infix).
:-j

__Jason
From: Lieven Marchand
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <m38zmqputr.fsf@localhost.localdomain>
Bruce Hoult <·····@hoult.org> writes:

> I understand that it was intended to allow both the infix and prefix 
> syntaxes to be used, at the user's discretion.  But then I guess even 
> the extremely experienced Lisp people such as Scott Fahlman and David 
> Moon found that actually an infix syntax wasn't so bad after all.  When 
> difficulties arose in defining a macro facility that could be mapped 
> mechanically between infix and prefix syntaxes they decided to drop the 
> prefix one.
> 
> If you know a different history I'd be interested to hear it.

What a beautiful history that has given Dylan misfix syntax and an
overly complex and underpowered macro system. But we've had that
discussion ;-)

-- 
Lieven Marchand <···@wyrd.be>
Glaðr ok reifr skyli gumna hverr, unz sinn bíðr bana.
From: Bruce Hoult
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <bruce-3CE89F.12033401032001@news.nzl.ihugultra.co.nz>
In article <··············@localhost.localdomain>, Lieven Marchand 
<···@wyrd.be> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > I understand that it was intended to allow both the infix and prefix 
> > syntaxes to be used, at the user's discretion.  But then I guess even 
> > the extremely experienced Lisp people such as Scott Fahlman and David 
> > Moon found that actually an infix syntax wasn't so bad after all.  When 
> > difficulties arose in defining a macro facility that could be mapped 
> > mechanically between infix and prefix syntaxes they decided to drop 
> > the 
> > prefix one.
> > 
> > If you know a different history I'd be interested to hear it.
> 
> What a beautiful history that has given Dylan misfix syntax and a
> overly complex and under powered macro system. But we've had that
> discussion ;-)

Under-powered?  A strange thing to say about something that is 
(regrettably, IMHO) Turing-complete...

-- Bruce
From: Joe Marshall
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <itlucqgy.fsf@content-integrity.com>
Bruce Hoult <·····@hoult.org> writes:

> Under-powered?  A strange thing to say about something that is 
> (regrettably, IMHO) Turing-complete...

A lot of things are Turing-complete (for instance, a Turing machine).
That doesn't necessarily imply ease of programming.

sendmail config files are Turing complete, too.
From: Bruce Hoult
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <bruce-0A20C3.14190301032001@news.nzl.ihugultra.co.nz>
In article <············@content-integrity.com>, Joe Marshall 
<···@content-integrity.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > Under-powered?  A strange thing to say about something that is 
> > (regrettably, IMHO) Turing-complete...
> 
> A lot of things are Turing-complete (for instance, a Turing machine).
> That doesn't necessarily imply ease of programming.
> 
> sendmail config files are Turing complete, too.

I don't believe I said or implied that powerful implies easy -- in fact 
I believe the opposite is often true.

In particular, I don't like a language having Turing-complete processes 
going on at compile-time.  This applies equally whether it is a 
self-contained sublanguage (such as Dylan macros or C++ templates) or 
the same language as you use for normal programming (as in CL procedural 
macros).  I think that macros are the appropriate tool for doing trivial 
syntactic rearrangements and that if you want to do something complex 
then the appropriate method is to use a tool that explicitly generates 
source code that the programmer can examine -- and debug.

-- Bruce
From: Rahul Jain
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <97kh8a$et$1@joe.rice.edu>
In article <···························@news.nzl.ihugultra.co.nz> on
<···························@news.nzl.ihugultra.co.nz>, "Bruce Hoult"
<·····@hoult.org> wrote:
> I think that macros are the appropriate tool for doing trivial syntactic
> rearrangements and that if you want to do something complex then the
> appropriate method is to use a tool that explicitly generates source
> code that the programmer can examine -- and debug.

The code generated by macros can be examined and debugged, just like any
code one writes "normally". The difference is that CL macros are powerful
enough that one can do all the external tool-generated source generation
from inside CL. CL macros are also pervasive throughout the language, as
they are an easy way to implement multiple forms which are semantically
similar. I don't see the benefit of running a separate lisp VM
just to do macroexpansions and save them to disk.

BTW, do you use LOOP or SERIES or ITERATE or FORMAT at all? I think those
would qualify as doing "something complex".

-- 
-> -/-                       - Rahul Jain -                       -\- <-
-> -\- http://linux.rice.edu/~rahul -=- ·················@usa.net -/- <-
-> -/- "I never could get the hang of Thursdays." - HHGTTG by DNA -\- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
   Version 11.423.999.220020101.23.50110101.042
   (c)1996-2000, All rights reserved. Disclaimer available upon request.
From: Pierre R. Mai
Subject: On macro systems (was Re: Language intolerance)
Date: 
Message-ID: <87n1b5dc4m.fsf_-_@orion.bln.pmsf.de>
Bruce Hoult <·····@hoult.org> writes:

> I don't believe I said or implied that powerful implies easy -- in fact 
> I believe the opposite is often true.

Given that Turing-completeness is so easy to achieve (often even
by accident), I don't think the expressive power of a language can be
defined by examining what can be computed in the language, but
rather what can reasonably be expressed in the language by a human.

> In particular, I don't like a language having Turing-complete processes 
> going on at compile-time.  This applies equally whether it is a 
> self-contained sublanguage (such as Dylan macros or C++ templates) or 
> the same language as you use for normal programming (as in CL procedural 
> macros).  I think that macros are the appropriate tool for doing trivial 
> syntactic rearrangements and that if you want to do something complex 
> then the appropriate method is to use a tool that explicitly generates 
> source code that the programmer can examine -- and debug.

But you can examine the source code generated by CL macros, and you
can debug that stuff.  More importantly most implementations will even
try to give you both views simultaneously (the expanded and the
original source) when debugging, something that few external tools
achieve.  I don't see why making the tool external does
in any way improve the situation.  Having seen many tools that
generate source code for traditional languages, I've very often found
those tools to be much harder to debug, especially since their
transformation is often performed in one huge step, whereas the
expansion of macros is a step-by-step process, that is supported by
reasonable tools:  If you want to find out what a particular piece of
input code is transformed into, you normally have little external tool
support to find out, whereas in CL you just macroexpand that part.
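Nothing exotic is needed for that, either -- just the standard
MACROEXPAND family (FROB and the file name below are placeholders):

  (macroexpand-1 '(dolist (x items) (frob x)))
  ;; => the DO/LET form this implementation generates for just that one
  ;;    DOLIST -- the exact expansion is implementation-dependent

  (pprint (macroexpand-1 '(with-open-file (s "data.xml") (read s))))
  ;; pretty-prints whatever this implementation's WITH-OPEN-FILE
  ;; expands into, for just this piece of the code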

Many of those external tools work as if the code was written like
this:

(defmacro do-this-in-augmented-language
  ;; Whole program/module/file
  )

Debugging such a thing is a nightmare.  While you could argue that
this is just the fault of the tool creator, I'd argue that this is
often the direct result of missing support for the tool creator:
Given the external nature of the tool, as well as missing parsers,
compiler hooks, etc. it is quite natural that the author will create a
one-pass 'compiler', rather than a well-structured, segmented and
traceable process.

So I can see (and have experienced) a whole set of disadvantages to
the external tool approach.  I'd be interested in hearing about the
disadvantages of the extendible compiler approach that CL's macro
system is part of.  Note that I, too, think that special
macro-languages (like those of Dylan, Scheme, and especially the C++
template system) often suffer from problems similar to those of the
external tool approaches.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Kent M Pitman
Subject: Re: On macro systems (was Re: Language intolerance)
Date: 
Message-ID: <sfwbsrlfx1j.fsf@world.std.com>
"Pierre R. Mai" <····@acm.org> writes:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > I don't believe I said or implied that powerful implies easy -- in fact 
> > I believe the opposite is often true.
> 
> Given that Turing-completeness is so easy to achieve (often even
> by accident), I don't think the expressive power of a language can be
> defined by examining what can be computed in the language, but
> rather what can reasonably be expressed in the language by a human.

I think that "turing power" is a near useless concept because of the problem
you cite.  I agree another term that is unrelated to turing needs to be
devised, and I agree that the term "expressive" should be used in this way,
not defined by computability but defined by something else more related
to practical reality.  It's underspecified even so, but I usually tend to
add "per unit time" in my mind when talking about expressiveness.

For example, one CAN write an object-oriented program in Fortran or C
[statement of turing equivalence],  but it takes longer than in Lisp
[statement of comparative expressive power].

> > In particular, I don't like a language having Turing-complete processes 
> > going on at compile-time.  This applies equally whether it is a 
> > self-contained sublanguage (such as Dylan macros or C++ templates) or 
> > the same language as you use for normal programming (as in CL procedural 
> > macros).  I think that macros are the appropriate tool for doing trivial 
> > syntactic rearrangements and that if you want to do something complex 
> > then the appropriate method is to use a tool that explicitly generates 
> > source code that the programmer can examine -- and debug.

This is like saying that some programs should always have to be done 
interactively and it should never be possible to write batch scripts.
Ugh.  

> But you can examine the source code generated by CL macros, and you
> can debug that stuff.

Indeed.  And what about things that ARE debugged?  And what about
potential users of the complicated macro that are not capable of
debugging it even if they could? (Look at Microsoft IE and its option
to turn off debugging of Javascript errors, which I'm sure most users
check.) It's pretty plain that even if you can't prove it 100%
debugged, you can still easily get to a point where the cumulative sum
of the certain time lost by running it always in this kind of debug
mode just in case there's a problem far exceeds the possible time lost
due to the occasional bug.
From: ········@hex.net
Subject: Re: On macro systems (was Re: Language intolerance)
Date: 
Message-ID: <wkpug1thwr.fsf@mail.hex.net>
>>>>> "Kent" == Kent M Pitman <······@world.std.com> writes:

Kent> "Pierre R. Mai" <····@acm.org> writes:
>> Bruce Hoult <·····@hoult.org> writes:
>> 
>> > I don't believe I said or implied that powerful implies easy -- in
>> > fact I believe the opposite is often true.
>> 
>> Given that Turing-completeness is so easy to achieve
>> (often even by accident), I don't think the expressive power of
>> a language can be defined by examining what can be computed in
>> the language, but rather what can reasonably be expressed in
>> the language by a human.

Kent> I think that "turing power" is a near useless concept because of
Kent> the problem you cite.  I agree another term that is unrelated to
Kent> turing needs to be devised, and I agree that the term
Kent> "expressive" should be used in this way, not defined by
Kent> computability but defined by something else more related to
Kent> practical reality.  It's underspecified even so, but I usually
Kent> tend to add "per unit time" in my mind when talking about
Kent> expressiveness.

Kent> For example, one CAN write an object-oriented program in Fortran
Kent> or C [statement of turing equivalence], but it takes longer than
Kent> in Lisp [statement of comparative expressive power].

Invoking the gremlin of "turing completeness" or "turing power" or
what have you seems to be a monkey that comes up to give the excuse
that some computing system displaying a gaping paucity of
expressiveness is, by some wild isomorphism, "as good as everything
else."

Some of the oft-not-so-gentle readers here might quickly throw that as
a dung-ball against the notion that Scheme is good for anything; I
would think it far more appropriate to throw the dung at the languages
that are _vastly_ less expressive such as Visual Basic and its ilk.

It would be nice to try to talk about some sort of "expressiveness
measure;" the fact that mathematicians and physicists and such keep on
creating new notations to describe the new areas of their disciplines
that pop up suggests to me that this would be a futile exercise.
(I'll allude to Gödel and incompleteness here, but would certainly not
suggest any _provable_ connection...)

My suspicion is that "expressiveness" is a multiple-edged sword in any
case, thinking in the Kolmogorov complexity direction of things.
"Kolmogorov complexity" refers to the size of the minimum UTM capable
of producing a particular string of symbols.  
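
(Spelled out, for a fixed universal machine U and a string x, that is
roughly

  K_U(x) = min { |p| : U(p) = x },

the length of the shortest program p that makes U print x.)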

You might add some nifty operations to a UTM that allow the number of
instructions required to fall dramatically; if that leads to the
program being incomprehensible, it's not evident that you've got
something of improved expressiveness.

Heading to the more concrete, while APL is capable of doing vast
quantities of stuff in a single line of code, most people don't think
in the vector space terms required to properly harness that.  Which
probably explains the fact that there is a rather minuscule niche of
APL programmers.

The fair degree of complexity of the CL specification similarly makes
it unsurprising that a lot of people find it too much to cope with.
It's expressive, to the point where people get frightened...
-- 
(concatenate 'string "cbbrowne" ·@acm.org")
http://vip.hex.net/~cbbrowne/finances.html
Rules of the Evil Overlord #177.  "If a scientist with a beautiful and
unmarried  daughter  refuses to  work  for me,  I  will  not hold  her
hostage. Instead, I  will offer to pay for her  future wedding and her
children's college tuition." <http://www.eviloverlord.com/>
From: Barry Margolin
Subject: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <XXBn6.27$_47.11792@burlma1-snr2>
In article <··············@mail.hex.net>,  <········@hex.net> wrote:
>Invoking the gremlin of "turing completeness" or "turing power" or
>what have you seems to be a monkey that comes up to give the excuse
>that some computing system displaying a gaping paucity of
>expressiveness is, by some wild isomorphism, "as good as everything
>else."

Can you imagine if carpenters were like computer scientists?  Some of them
would argue that it's not necessary to own a hammer because the butt of a
screwdriver is naildriver-complete. :)

No wonder all other engineering disciplines laugh at us....

-- 
Barry Margolin, ······@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: ········@hex.net
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <W_En6.557$E57.33550@news4.aus1.giganews.com>
Barry Margolin <······@genuity.net> writes:
> In article <··············@mail.hex.net>,  <········@hex.net> wrote:
> >Invoking the gremlin of "turing completeness" or "turing power" or
> >what have you seems to be a monkey that comes up to give the excuse
> >that some computing system displaying a gaping paucity of
> >expressiveness is, by some wild isomorphism, "as good as everything
> >else."
> 
> Can you imagine if carpenters were like computer scientists?  Some of them
> would argue that it's not necessary to own a hammer because the butt of a
> screwdriver is naildriver-complete. :)
> 
> No wonder all other engineering disciplines laugh at us....

The flip side of this is that people can, and do, do pretty
appallingly powerful things using the "duct tape" of Perl scripts.
That generally _doesn't_ include writing things resembling Maple or
Macsyma, but the list of "useful things done" is probably pretty high
nonetheless...

-- 
(concatenate 'string "aa454" ·@freenet.carleton.ca")
http://www.ntlug.org/~cbbrowne/rdbms.html
Rules of  the Evil  Overlord #9. "I  will not include  a self-destruct
mechanism unless absolutely necessary. If it is necessary, it will not
be a  large red  button labelled  "Danger: Do Not  Push". The  big red
button marked "Do Not Push" will instead trigger a spray of bullets on
anyone  stupid enough to  disregard it.  Similarly, the  ON/OFF switch
will not clearly be labelled as such." <http://www.eviloverlord.com/>
From: Marco Antoniotti
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <y6cae74teiw.fsf@octagon.mrl.nyu.edu>
Barry Margolin <······@genuity.net> writes:

> In article <··············@mail.hex.net>,  <········@hex.net> wrote:
> >Invoking the gremlin of "turing completeness" or "turing power" or
> >what have you seems to be a monkey that comes up to give the excuse
> >that some computing system displaying a gaping paucity of
> >expressiveness is, by some wild isomorphism, "as good as everything
> >else."
> 
> Can you imagine if carpenters were like computer scientists?  Some of them
> would argue that it's not necessary to own a hammer because the butt of a
> screwdriver is naildriver-complete. :)
> 
> No wonder all other engineering disciplines laugh at us....

Yeah.  But then many engineers go ahead and either re-invent the wheel
or keep using the butt of the screwdriver.

Informatically yours....

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Tim Bradshaw
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <nkjn1b4gweo.fsf@tfeb.org>
Barry Margolin <······@genuity.net> writes:

> 
> Can you imagine if carpenters were like computer scientists?  Some of them
> would argue that it's not necessary to own a hammer because the butt of a
> screwdriver is naildriver-complete. :)
> 

Another contingent of engineers would construct large civil
engineering projects with gaffa tape as a crucial component.
Periodically these would collapse in poorly-understood circumstances
with huge loss of life.

Others would refuse to install safety features such as guards on
machinery, seatbelts in cars, lifeboats on ships and so on, as it
results in loss of performance and increased cost and weight.  This
contingent would produce most of the engineered products we use, often
in league with the gaffa-tape contingent (who they secretly despise).

Others still would argue -- apparently seriously -- that all
engineering should be done based on quantum gravity.  Unfortunately,
lacking a satisfactory theory of quantum gravity, their projects would
be somewhat limited, if beautifully made.

> No wonder all other engineering disciplines laugh at us....

No wonder

--tim
From: Paul Wallich
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <pw-0203010959290001@192.168.1.100>
In article <···············@tfeb.org>, Tim Bradshaw <···@tfeb.org> wrote:

>Barry Margolin <······@genuity.net> writes:
>
>> 
>> Can you imagine if carpenters were like computer scientists?  Some of them
>> would argue that it's not necessary to own a hammer because the butt of a
>> screwdriver is naildriver-complete. :)
>> 
>
>Another contingent of engineers would construct large civil
>engineering projects with gaffa tape as a crucial component.
>Periodically these would collapse in poorly-understood circumstances
>with huge loss of life.
>
>Others would refuse to install safety features such as guards on
>machinery, seatbelts in cars, lifeboats on ships and so on, as it
>results in loss of performance and increased cost and weight.  This
>contingent would produce most of the engineered products we use, often
>in league with the gaffa-tape contingent (who they secretly despise).

Which part of the 19th and early 20th centuries doesn't this describe?
The main difference is that mass production of bits is so much easier
than mass production of bits of metal.

Of course turing-equivalence also goes the other way: all nonworking
systems are equivalent regardless of how elegant or inelegant their
design...

paul
From: Tim Bradshaw
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <ey3vgprzke2.fsf@cley.com>
* Paul Wallich wrote:

> Which part of the 19th and early 20th centuries doesn't this describe?
> The main difference is that mass production of bits is so much easier
> than mass production of bits of metal.

I think that's a good point -- software development is between one and
two hundred years behind engineering disciplines.  Unfortunately we
don't seem to be very good at taking any notice of what they learned
in that one to two hundred years.

--tim
From: Kent M Pitman
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <sfwg0gvj5bg.fsf@world.std.com>
Tim Bradshaw <···@cley.com> writes:

> I think that's a good point -- software development is between one and
> two hundred years behind engineering disciplines.  Unfortunately we
> don't seem to be very good at taking any notice of what they learned
> in that one to two hundred years.

That's double-edged--i.e., sometimes a good thing.

The one thing I'm fearful they'll pick up from that other discipline
is accreditation and/or licensing.  We see it being sold for
particular products, and maybe that's ok, but it should not be sold
for the whole information industry or a Bad Thing will happen.  The
first thing that would happen, apropos this newsgroup, would be the 
closing down of alternate languages.
From: Tim Bradshaw
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <ey3r90ezku0.fsf@cley.com>
* Kent M Pitman wrote:

> The one thing I'm fearful they'll pick up from that other discipline
> is accreditation and/or licensing.  We see it being sold for
> particular products, and maybe that's ok, but it should not be sold
> for the whole information industry or a Bad Thing will happen.  The
> first thing that would happen, apropos this newsgroup, would be the 
> closing down of alternate languages.

I think that this is wrong, but in a peculiar way -- I'll try
to explain what I think but probably get it wrong.

I agree with you that if accreditation became something you needed it
would result in a lot of bad things, like closing down of less-used
languages, rigid and stupid development methodologies and so on.

*But* I think that accreditation is not itself harmful -- it's harmful
because software development is at such a rudimentary stage.  I don't
think that accreditation for a civil engineer is harmful, because what
it says is something like `this person knows how to build structures
which are safe and maintainable and so on'.  It *doesn't* say that
`this person will rigidly insist that all the things they build are
made of concrete and will not consider steel tension structures'.

I think that the problem is that software development, as it currently
stands, consists of various squabbling cults (we call them
`methodologies'), and accreditation would be a way for one of these
cults to oust the others.  Engineering disciplines aren't like that.

--tim
From: Kent M Pitman
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <sfwd7bypnzu.fsf@world.std.com>
Tim Bradshaw <···@cley.com> writes:

> I think that the problem is that software development, as it currently
> stands, consists of various squabbling cults (we call them
> `methodologies'), and accreditation would be a way for one of these
> cults to oust the others.  Engineering disciplines aren't like that.

This is what I meant.

Computer Science is mostly Computer Religion.  Hence your use of the term 
"cult" is dead on.  The last thing we need is a standardized religion.
From: David Thornley
Subject: Re: On Turing equivalence (was Re: On macro systems (was Re: Language intolerance))
Date: 
Message-ID: <lQTo6.591$y6.145251@ruti.visi.com>
In article <···············@world.std.com>,
Kent M Pitman  <······@world.std.com> wrote:
>Tim Bradshaw <···@cley.com> writes:
>
>> I think that the problem is that software development, as it currently
>> stands, consists of various squabbling cults (we call them
>> `methodologies'), and accreditation would be a way for one of these
>> cults to oust the others.  Engineering disciplines aren't like that.
>
>This is what I meant.
>
>Computer Science is mostly Computer Religion.  Hence your use of the term 
>"cult" is dead on.  The last thing we need is a standardized religion.

Computer Science is a legitimate science, in its way, but it comprises
too many things.  Imagine if physics departments taught Mechanical and
Electrical Engineering - then imagine any sort of complicated useful
device designed by physics majors.  There will eventually be a known
corpus of knowledge to build Software Engineering around, but I
don't think we're there yet.

The interesting part here is that most shops (in my experience, and
according to what statistics I've seen) are at about a Nineteenth
Century engineering level (without the option to overbuild things,
a la the Brooklyn Bridge).  Any attempt, at this time, to license
software engineers properly would mean that some 90% of computer
shops would have nobody that would qualify, and would not be able
to hire a licensed engineer to certify processes without a great
deal of disruption, and I don't think that's politically possible
right now.  (I assume we're talking about government-backed
certification efforts.  There have been other certification processes
in the past, and AFAICT nobody ever paid much attention to these.)

So, if there were a serious certification effort, it couldn't be
based on what we know about software engineering now (which
wouldn't be a bad thing, really).  In order to be exclusive
enough to be politically worth doing, it would therefore have to
concentrate on other criteria, likely including specialized
knowledge.  To make a vague effort at being on-topic, it would likely
define object orientation to be encapsulation, inheritance, and
polymorphism, and therefore CLOS would legally not be an object-oriented
system.

So, while it would be really interesting to see ISO attempt to
standardize religion (ANSI standardization would be too boring -
arguably the US has a standard religion that could be codified
using Billy Graham to write the base document), I don't want to
see any effort to certify Software Engineers any time soon.

--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Paul Dietz
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <3A9EDB43.8BEA6CD6@stc.comm.mot.com>
Bruce Hoult wrote:

>  I think that macros are the appropriate tool for doing trivial
> syntactic rearrangements and that if you want to do something complex
> then the appropriate method is to use a tool that explicitly generates
> source code that the programmer can examine -- and debug.

I could not disagree more.  The ability to layer new language
features on top of lisp and to use them interactively (vs.
going through a batch-oriented preprocessor) is a feature
I use all the time.

If anything, Common Lisp macros are not powerful enough
for my taste.  I'd like to see environment information (for
example, declared and inferred types of expressions) be available
to the macros (or, at least, to compiler macros).
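
As a rough sketch of the sort of thing I mean (MY-SQUARE is a made-up
example; today a compiler macro only gets to look at the source form,
not at declared or inferred types):

  (defun my-square (x)
    (* x x))

  ;; When the argument is a literal number, fold the multiplication at
  ;; compile time; otherwise decline and leave the ordinary call alone.
  (define-compiler-macro my-square (&whole form x)
    (if (numberp x)
        (* x x)
        form))

  ;; (my-square 3)  ; a compiler is free to emit 9 directly
  ;; (my-square y)  ; stays a normal function call

With type information available, the same trick could apply to
arbitrary expressions known to be, say, fixnums.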

If you want to examine the macro expanded code, you can use
a code walker/expander.

	Paul
From: Lieven Marchand
Subject: Re: Language intolerance (was Re: Is Scheme a `Lisp'?)
Date: 
Message-ID: <m3d7c1gqap.fsf@localhost.localdomain>
Bruce Hoult <·····@hoult.org> writes:

> In article <············@content-integrity.com>, Joe Marshall 
> <···@content-integrity.com> wrote:
> 
> > Bruce Hoult <·····@hoult.org> writes:
> > 
> > > Under-powered?  A strange thing to say about something that is 
> > > (regrettably, IMHO) Turing-complete...
> > 
> > A lot of things are Turing-complete (for instance, a Turing machine).
> > That doesn't necessarily imply ease of programming.
> > 
> > sendmail config files are Turing complete, too.
> 
> I don't believe I said or implied that powerful implies easy -- in fact 
> I believe the opposite is often true.
> 
> In particular, I don't like a language having Turing-complete processes 
> going on at compile-time.  This applies equally whether it is a 
> self-contained sublanguage (such as Dylan macros or C++ templates) or 
> the same language as you use for normal programming (as in CL procedural 
> macros).  I think that macros are the appropriate tool for doing trivial 
> syntactic rearrangements and that if you want to do something complex 
> then the appropriate method is to use a tool that explicitly generates 
> source code that the programmer can examine -- and debug.

MACROEXPAND is in the standard.  Most implementations also have walker
functionality that will recursively expand inner forms until
everything referenced is primitive.
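
For example (a trivial sketch; WITH-DOUBLED is invented on the spot):

  (defmacro with-doubled (var form &body body)
    `(let ((,var (* 2 ,form)))
       ,@body))

  (macroexpand-1 '(with-doubled x 21 (print x)))
  ;; => (LET ((X (* 2 21))) (PRINT X)), T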

Would you really want to write LOOP or SERIES in Dylan macros, and
could it be done?

-- 
Lieven Marchand <···@wyrd.be>
Glaðr ok reifr skyli gumna hverr, unz sinn bíðr bana.