From: Marc Spitzer
Subject: Why lisp is growing
Date: 
Message-ID: <86wumi23v3.fsf@bogomips.optonline.net>
here we go

1: 2 new lisp vendors, corman and scl
2: more messages on cll and new faces showing up
3: growing number of open source projects for cl
4: UFFI, get n FFI's for the price of 1
5: several lisp gui's are available
6: new user's groups springing up, ok at least 1.
7: it is pure fun to work with

I am sure I forgot some obvious stuff so please add
it to the lisp, for fun and google.

marc  

From: Paul F. Dietz
Subject: Re: Why lisp is growing
Date: 
Message-ID: <_AWdnfhnIewu_2igXTWcog@dls.net>
Marc Spitzer wrote:
> here we go
> 
> 1: 2 new lisp vendors, corman and scl
> 2: more messages on cll and new faces showing up
> 3: growing number of open source projects for cl
> 4: UFFI, get n FFI's for the price of 1
> 5: several lisp gui's are available
> 6: new user's groups springing up, ok at least 1.
> 7: it is pure fun to work with

8: ?
9: Profit!

	Paul
From: larry
Subject: Re: Why lisp is growing
Date: 
Message-ID: <7b8f89d6.0212101002.2e592006@posting.google.com>
I was wondering why you don't see articles in computer magazines
extolling the virtues of Lisp. I've seen such articles about Ruby,
Python, and Perl. Is the dearth of articles on Lisp because Lisp people
don't feel the need to proselytize and so don't write magazine articles,
or is it because computer magazines refuse to publish positive articles
about Lisp?


"Paul F. Dietz" <·····@dls.net> wrote in message news:<······················@dls.net>...
> Marc Spitzer wrote:
> > here we go
> > 
> > 1: 2 new lisp vendors, corman and scl
> > 2: more messages on cll and new faces showing up
> > 3: growing number of open source projects for cl
> > 4: UFFI, get n FFI's for the price of 1
> > 5: several lisp gui's are available
> > 6: new user's groups springing up, ok at least 1.
> > 7: it is pure fun to work with
> 
> 8: ?
> 9: Profit!
> 
> 	Paul
From: Jochen Schmidt
Subject: Re: Why lisp is growing
Date: 
Message-ID: <at5dsl$u6b$02$1@news.t-online.com>
larry wrote:

> I was wondering why you don't see articles in computer magazines
> extolling the virtues of Lisp. I've seen such articles about Ruby,
> Python, and Perl. Is the dearth of articles on Lisp because Lisp people
> don't feel the need to proselytize and so don't write magazine articles,
> or is it because computer magazines refuse to publish positive articles
> about Lisp?


At least in Germany, several articles were published in several
magazines within the last year and this one.

ciao,
Jochen

--
http://www.dataheaven.de
From: Andreas Hinze
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DF64D50.46A6C8A0@smi.de>
larry wrote:
> 
> I was wondering why you don't see articles in computer magazines
> extolling the virtues of Lisp. I've seen such articles about Ruby,
> Python, and Perl. Is the dearth of articles on Lisp because Lisp people
> don't feel the need to proselytize and so don't write magazine articles,
> or is it because computer magazines refuse to publish positive articles
> about Lisp?
> 
There are not only articles. There are also Lisp events (in German, sorry):

http://www.mdlug.de/index.php/kalender/themenabend-lisp.inc?LAYOUT=mdlugv3

Regards
AHz
From: Nils Kassube
Subject: Re: Why lisp is growing
Date: 
Message-ID: <81fzt1x3rd.fsf@darwin.lan.kassube.de>
··········@hotmail.com (larry) writes:

> Python, Perl. Is the dearth of articles on Lisp because Lisp people
> don't feel the need to proselytize and so don't write magazine
> articles, or is it because computer magazines refuse to publish
> positive articles about Lisp?

A good article takes a lot of work. The money you earn by writing 
computer magazine articles is almost never worth the effort. Many
articles are just PR for other things its author sells: consulting,
training, whatever. Maybe we can deduce that all competent Lisp
programmers are already busy with more lucrative business :-)
From: Kenny Tilton
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DFA2117.5000707@nyc.rr.com>
> ··········@hotmail.com (larry) writes:
> 
> 
>>Python, Perl. Is the dearth of articles on Lisp because Lisp people
>>don't feel the need to proselytize and so don't write magazine
>>articles, or is it because computer magazines refuse to publish
>>positive articles about Lisp?

Someone in the audience at ILC, during an exchange, mentioned in passing 
that OOPSLA would not take papers involving CLOS. I did not get a chance 
to follow up to confirm I heard what I thought I heard, but they did go 
on to say OOPSLA just wanted papers on <something>, so I'm pretty sure I 
heard right.

And I once had a brief discussion with a MacTech editor about doing 
something on MCL, but he steered it into "what can you do in MCL to help 
specifically with Mac programming issues", away from "what a great tool 
in general for development".

All this is reminiscent of O'Reilly's "Lisp books? We don't need no 
stinking Lisp books!".

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Cells let us walk, talk, think, make love and realize
  the bath water is cold." -- Lorraine Lee Cudmore
From: Michael Sullivan
Subject: Re: Why lisp is growing
Date: 
Message-ID: <1fne41w.1unt8fyclcs74N%michael@bcect.com>
Paul F. Dietz <·····@dls.net> wrote:

> Marc Spitzer wrote:
> > here we go
> > 
> > 1: 2 new lisp vendors, corman and scl
> > 2: more messages on cll and new faces showing up
> > 3: growing number of open source projects for cl
> > 4: UFFI, get n FFI's for the price of 1
> > 5: several lisp gui's are available
> > 6: new user's groups springing up, ok at least 1.
> > 7: it is pure fun to work with

> 8: ?
> 9: Profit!
 

YOMANK.


The response to this has been tragic.  Absolutely tragic.   Doesn't
anyone else watch South Park?


Michael

-- 
Michael Sullivan
Business Card Express of CT             Thermographers to the Trade
Cheshire, CT                                      ·······@bcect.com
From: Kenny Tilton
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3E00DCF5.6040106@nyc.rr.com>
Michael Sullivan wrote:

> The response to this has been tragic.  Absolutely tragic.   Doesn't
> anyone else watch South Park?

mrum mm mmph rmmf.


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Cells let us walk, talk, think, make love and realize
  the bath water is cold." -- Lorraine Lee Cudmore
From: Peter Seibel
Subject: Re: Why lisp is growing
Date: 
Message-ID: <m3ptsaemx9.fsf@localhost.localdomain>
Marc Spitzer <········@optonline.net> writes:

> 4: UFFI, get n FFI's for the price of 1

Just out of curiosity, are any implementors (commercial or otherwise)
thinking about moving their own FFI toward the UFFI? Are there great
differences between different FFI that make them a competitive
advantage or is that an area where there could be a reasonable de
facto standard?

-Peter

-- 
Peter Seibel
·····@javamonkey.com
From: Gabe Garza
Subject: Re: Why lisp is growing
Date: 
Message-ID: <87znrey82a.fsf@ix.netcom.com>
Peter Seibel <·····@javamonkey.com> writes:

> Marc Spitzer <········@optonline.net> writes:
> 
> > 4: UFFI, get n FFI's for the price of 1
> 
> Just out of curiosity, are any implementors (commercial or otherwise)
> thinking about moving their own FFI toward the UFFI? Are there great
> differences between different FFI that make them a competitive
> advantage or is that an area where there could be a reasonable de
> facto standard?

The biggest obstacle I can see to implementations adopting UFFI is
that some implementations (notably LispWorks and ACL(??)) provide a
"call back" mechanism so you can call Lisp from other languages but
most implementations don't.  UFFI doesn't have this functionality;
perhaps it could move from "an implementation that UFFI supports
provides all of this functionality" to "an implementation that UFFI
supports provides a (possibly improper) subset of this functionality."
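
[For readers unfamiliar with UFFI, its portable subset looks roughly like
this. A sketch based on UFFI's documented `def-function` and `with-cstring`
macros, using the stock strlen binding; one definition expands into the
native FFI of whichever implementation compiles it.]

```lisp
;; Declare the C function once; UFFI translates this into the
;; implementation-specific FFI declaration (CMUCL, SBCL, ACL,
;; LispWorks, ...) at macroexpansion time.
(uffi:def-function ("strlen" c-strlen)
    ((str :cstring))
  :returning :int)

(defun string-length-via-c (s)
  ;; with-cstring converts the Lisp string to a C string for the
  ;; duration of the body and frees it afterwards.
  (uffi:with-cstring (cs s)
    (c-strlen cs)))
```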

Gabe Garza
From: Roger Corman
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3df5495f.268544726@nntp.sonic.net>
On Tue, 10 Dec 2002 01:25:33 GMT, Gabe Garza <·······@ix.netcom.com>
wrote:

>Peter Seibel <·····@javamonkey.com> writes:
>
>> Marc Spitzer <········@optonline.net> writes:
>> 
>> > 4: UFFI, get n FFI's for the price of 1
>> 
>> Just out of curiosity, are any implementors (commercial or otherwise)
>> thinking about moving their own FFI toward the UFFI? Are there great
>> differences between different FFI that make them a competitive
>> advantage or is that an area where there could be a reasonable de
>> facto standard?
>
>The biggest obstacle I can see to implementations adopting UFFI is
>that some implementations (notably LispWorks and ACL(??)) provide a
>"call back" mechanism so you can call Lisp from other languages but
>most implementations don't.  UFFI doesn't have this functionality;
>perhaps it could move from "an implementation that UFFI supports
>provides all of this functionality" to "an implementation that UFFI
>supports provides a (possibly improper) subset of this functionality."
>

Corman Lisp supports callbacks as well. Any complete Windows
implementation needs to support them, because so much of the Win32 API
is based on callbacks. Also, I believe Lisp has big potential for
creating extensions, or plug-ins. In Windows this is done with DLLs,
and included functions must all be callable by foreign code.
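
[The callback mechanisms the implementations provide are all spelled
differently. For illustration only, a LispWorks-style definition might look
like the following; this is an editor's sketch from the FLI documentation,
not Roger's code, and Corman Lisp and ACL each use their own syntax.]

```lisp
;; LispWorks FLI sketch: make a Lisp function callable from C, e.g. as
;; a comparison function for qsort() or a Win32 window procedure.
(fli:define-foreign-callable ("compare_ints" :result-type :int)
    ((a (:pointer :int))
     (b (:pointer :int)))
  (- (fli:dereference a) (fli:dereference b)))

;; The foreign side is handed a function pointer obtained with
;; something like:
;;   (fli:make-pointer :symbol-name "compare_ints")
```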

Roger
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey3n0newbjo.fsf@cley.com>
* Peter Seibel wrote:
> Just out of curiosity, are any implementors (commercial or otherwise)
> thinking about moving their own FFI toward the UFFI? Are there great
> differences between different FFI that make them a competitive
> advantage or is that an area where there could be a reasonable de
> facto standard?

last time I looked at UFFI (which was months ago, I would be glad to
be wrong about what it can do now) it had two crippling issues which
would mean I just could never use it:

- No callbacks, (calls from C into Lisp) which are essential for
  things like window system programming, where the underlying system
  needs to call into Lisp on events.

- No way of specifying external formats for strings, and having the
  conversion to a Lisp string just happen.  So for instance if you
  have something that deals with UTF-8 encoded strings, you have to
  write your own UTF-8 encoder/decoder.

The application I've spent most of my time on since the start of the
year uses expat for XML parsing, which needs callbacks, and deals in
UTF-8 encoded strings.  So UFFI isn't much use for me at the moment,
while the implementation's native FFI just works.
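
[To make the second complaint concrete, here is a minimal sketch of the kind
of UTF-8 decoding one is forced to hand-roll when the FFI hands back raw
octets. Editor's illustration, not Tim's code; it covers only 1- to 3-byte
sequences, does no error checking, and assumes the implementation's
characters cover the code points involved.]

```lisp
(defun utf8-octets-to-string (octets)
  "Decode a vector of UTF-8 octets into a Lisp string (no validation)."
  (with-output-to-string (out)
    (let ((i 0) (n (length octets)))
      (loop while (< i n) do
        (let ((b (aref octets i)))
          (multiple-value-bind (code len)
              (cond ;; 0xxxxxxx: ASCII, one byte
                    ((< b #x80) (values b 1))
                    ;; 110yyyyy 10xxxxxx: two bytes
                    ((< b #xE0)
                     (values (logior (ash (ldb (byte 5 0) b) 6)
                                     (ldb (byte 6 0) (aref octets (+ i 1))))
                             2))
                    ;; 1110zzzz 10yyyyyy 10xxxxxx: three bytes
                    (t
                     (values (logior (ash (ldb (byte 4 0) b) 12)
                                     (ash (ldb (byte 6 0) (aref octets (+ i 1))) 6)
                                     (ldb (byte 6 0) (aref octets (+ i 2))))
                             3)))
            (write-char (code-char code) out)
            (incf i len)))))))
```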

--tim
From: Kenny Tilton
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DF5F37D.3050903@nyc.rr.com>
Tim Bradshaw wrote:
> * Peter Seibel wrote:
> 
>>Just out of curiosity, are any implementors (commercial or otherwise)
>>thinking about moving their own FFI toward the UFFI? Are there great
>>differences between different FFI that make them a competitive
>>advantage or is that an area where there could be a reasonable de
>>facto standard?
> 
> 
> last time I looked at UFFI (which was months ago, I would be glad to
> be wrong about what it can do now) it had two crippling issues which
> would mean I just could never use it:
> 
> - No callbacks, (calls from C into Lisp) which are essential for
>   things like window system programming, where the underlying system
>   needs to call into Lisp on events.

Well for a cross-platform like Cells, UFFI saves me a lot of trouble. 
When I get callbacks I'll do something UFFI-like of my own for the 
callbacks, but I would be doing that in any case. UFFI is not costing me 
anything, but it is giving me a lot in other areas.

"Crippling" is a bit harsh, but I am guessing you just need to get your 
stuff working with one FFI, so UFFI has nothing to offer and all you are 
left with are the shortcomings arising from the LCD design.

> 
> - No way of specifying external formats for strings, and having the
>   conversion to a Lisp string just happen.  

that would be sweet.


-- 

  kenny tilton
  clinisys, inc
  ---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor,
   and I'm happy to state I finally won out over it."
                                                   Elwood P. Dowd
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey3n0ndvrgs.fsf@cley.com>
* Kenny Tilton wrote:
> "Crippling" is a bit harsh, but I am guessing you just need to get
> your stuff working with one FFI, so UFFI has nothing to offer and all
> you are left with are the shortcomings arising from the LCD design.

Well, I consider not being able to get my program to run at all on
*any* implementation with UFFI to be pretty crippling, actually.
Maybe `fails to work for the same reasons on all platforms' is seen as
better than `works on several platforms, with porting effort' in
somebody's world.

--tim
From: Matthew Danish
Subject: Re: Why lisp is growing
Date: 
Message-ID: <20021210145730.J8053@lain.cheme.cmu.edu>
On Tue, Dec 10, 2002 at 12:31:55AM +0000, Peter Seibel wrote:
> Marc Spitzer <········@optonline.net> writes:
> 
> > 4: UFFI, get n FFI's for the price of 1
> 
> Just out of curiosity, are any implementors (commercial or otherwise)
> thinking about moving their own FFI toward the UFFI? Are there great
> differences between different FFI that make them a competitive
> advantage or is that an area where there could be a reasonable de
> facto standard?

UFFI as it exists currently is the subset of FFI functionality that can
be supported across all platforms.  This is obviously not suitable for a
real FFI.  I would like to see an effort to create such a standardized
FFI, especially after all the pitfalls I encountered working with CL-SDL
and UFFI.  When I have some time I plan on drawing up a basic interface,
but I don't anticipate anything happening soon.  There are a number of
inadequacies with the current interfaces, such as macro-based `static'
interfaces where you cannot specify types and slots to be determined at
runtime.  I have yet to determine whether this is simply an
implementation choice or something necessary.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Kenny Tilton
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DF53A67.2030704@nyc.rr.com>
How about the astonishing number of CL implementations available, and 
specifically the number of open/free/whatever-you-call-them 
implementations? I was reminded of this by the recent ECL announcement 
here. Granted that is subsumed in part by #3, but it might be worthy of its 
own itemization.

Marc Spitzer wrote:
> here we go
> 
> 1: 2 new lisp vendors, corman and scl
> 2: more messages on cll and new faces showing up
> 3: growing number of open source projects for cl
> 4: UFFI, get n FFI's for the price of 1
> 5: several lisp gui's are available
> 6: new user's groups springing up, ok at least 1.
> 7: it is pure fun to work with
> 
> I am sure I forgot some obvious stuff so please add
> it to the lisp, for fun and google.
> 
> marc  


-- 

  kenny tilton
  clinisys, inc
  ---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor,
   and I'm happy to state I finally won out over it."
                                                   Elwood P. Dowd
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey3isy2w8hn.fsf@cley.com>
* Kenny Tilton wrote:
> How about the astonishing number of CL implementations available, and
> specifically the number of open/free/whatever-you-call-them
> implementations? I was reminded of this by the recent ECL announcement
> here. Granted that is subsumed in part by #3, but might be worthy of
> its own itemization.

The `languages' (really implementations) perceived to be most
successful - Perl, Python &co often have essentially only one
implementation.  I don't think implementation count correlates with
`success' at all.

--tim
From: Skull
Subject: Re: Why lisp is growing
Date: 
Message-ID: <5a6830f6.0212100735.4fcba2cc@posting.google.com>
Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...
> * Kenny Tilton wrote:
> > How about the astonishing number of CL implementations available, and
> > specifically the number of open/free/whatever-you-call-them
> > implementations? I was reminded of this by the recent ECL announcement
> > here. Granted that is subsumed in part by #3, but might be worthy of
> > its own itemization.
> 
> The `languages' (really implementations) perceived to be most
> successful - Perl, Python &co often have essentially only one
> implementation.  I don't think implementation count correlates with
> `success' at all.
> 
> --tim

PeopleCode :-(
From: Kenny Tilton
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DF5F64F.8050807@nyc.rr.com>
Tim Bradshaw wrote:
> * Kenny Tilton wrote:
> 
>>How about the astonishing number of CL implementations available, 
> 
> ....  I don't think implementation count correlates with
> `success' at all.
> 

You confuse necessary with sufficient. You also think we are saying 
"Lisp roolz", when we are saying "Lisp will soon rool". Note the subject 
line.

-- 

  kenny tilton
  clinisys, inc
  ---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor,
   and I'm happy to state I finally won out over it."
                                                   Elwood P. Dowd
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey3r8cpvrke.fsf@cley.com>
* Kenny Tilton wrote:

> You confuse necessary with sufficient. You also think we are saying
> "Lisp roolz", when we are saying "Lisp will soon rool". Note the
> subject line.

You confuse statistics with logic, and make silly inferences about my
mental state to boot.  Read what I *wrote* not what you think I meant.

--tim
From: Daniel Barlow
Subject: Re: Why lisp is growing
Date: 
Message-ID: <87vg21srad.fsf@noetbook.telent.net>
Tim Bradshaw <···@cley.com> writes:

> * Kenny Tilton wrote:
>> How about the astonishing number of CL implementations available, and
>> specifically the number of open/free/whatever-you-call-them
>> implementations? I was reminded of this by the recent ECL announcement
>> here. Granted that is subsumed in part by #3, but might be worthy of
>> its own itemization.
>
> The `languages' (really implementations) perceived to be most
> successful - Perl, Python &co often have essentially only one
> implementation.  I don't think implementation count correlates with
> `success' at all.

Who said anything about 'success'?  The thread is discussing the
reasons that Lisp is 'growing'.

'Course, they haven't defined 'growing' either.  Maybe the GC is leaking.


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Ed Symanzik
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DF644A0.EEBA7746@msu.edu>
Daniel Barlow wrote:
> 
> The thread is discussing the reasons that Lisp is 'growing'.

What got me interested was seeing three slashdot articles in one 
month related to Lisp.

Two articles by Paul Graham, "Beating the Averages" and "A Plan
for Spam" - he's a great preacher.  Also, an article comparing the 
performance of Lisp and Java showed up while I was investigating 
Java.

Following up, I discovered a wealth of information online, including
articles, code, tutorials, books(!), and free implementations.
So far, I've been able to find information written to my level
as my exploration expands.

My two biggest obstacles right now are:
 - My vi background.
 - Lack of a simple project to start on.
From: Heow
Subject: Re: Why lisp is growing
Date: 
Message-ID: <18477636.0212101653.66c3d0c8@posting.google.com>
> > The thread is discussing the reasons that Lisp is 'growing'.

What set me off on my recent "Lisp Revival" is that since this is my
10th year of professional development, I'm being a little
introspective.  So what do I have to show for 10 years of development?
Well, a bunch of C++/Java skills, as well as lots of "non-trivial"
problems that simply couldn't be answered no matter which design
patterns you throw at them.  Oh yeah, and ~50% of my projects either
failed in spectacular ways or simply never shipped; I'm still trying
to mentally reconcile that.

Yeah, I'll admit I'm also a little bitter, as I expected functional
languages to take over the world 5 years ago.  On the bright side,
though, the rise of Perl, Python, and Ruby has proven to me that PHBs
are finally grokking the value of developer time.  I think that this
unique period could prove to be beneficial to the Lisp community,
especially considering the current amount of "free or nearly-so" Lisp
tools and information available.

Also I feel like I have a bit of catching up to do.  10 years ago
Common Lisp was anything but, and although CLOS technically did exist,
I wasn't aware of it.  But to be fair, during that same time, my peers
were driving me nuts by putting C++ on their resumes just because they
liked to use the keen '//' comment operator.

Besides all that, Lisp is fun!   You can produce interesting code to
wrap your head around, where in the C++/Java world, all I have to look
forward to anymore is the "Obfuscated C Contest".

Ok, I'm done now, move along folks.  :-)

 - Heow
From: Frank A. Adrian
Subject: Re: Why lisp is growing
Date: 
Message-ID: <X6zJ9.76$7t2.190491@news.uswest.net>
Heow wrote:
> Besides all that, Lisp is fun!

This is the essential and most important reason!

faa
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <at6dqe$7no$1@newsmaster.cc.columbia.edu>
Heow wrote:

> Besides all that, Lisp is fun!   You can produce interesting code to
> wrap your head around, where in the C++/Java world, all I have to look
> forward to anymore is the "Obfuscated C Contest".


<quote source="http://mindprod.com/unmaintricks.html">

LISP is a dream language for the writer of unmaintainable code. Consider 
these baffling fragments:

(lambda (*<8-]= *<8-[= ) (or *<8-]= *<8-[= ))
(defun :-] (<) (= < 2))
(defun !(!)(if(and(funcall(lambda(!)(if(and '(< 0)(< ! 2))1 nil))(1+ !))
(not(null '(lambda(!)(if(< 1 !)t nil)))))1(* !(!(1- !))))) 

</quote>

BTW, the web site itself is a very entertaining read.

Oleg
From: Kenny Tilton
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DF6922E.4080104@nyc.rr.com>
Heow wrote:
> Oh yeah, and ~50% of my projects either
> failed in spectacular ways or simply never shipped- I'm still trying
> to mentally reconcile that.

No need. Much of my best work never went into production. One time I got 
put on a two-year-old dying project. Started from scratch and was ready 
for acceptance testing in two months. Users had lost interest by then.

So blame management. If you are management, blame Canada.

:)


-- 

  kenny tilton
  clinisys, inc
  ---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor,
   and I'm happy to state I finally won out over it."
                                                   Elwood P. Dowd
From: Hannah Schroeter
Subject: Re: Why lisp is growing
Date: 
Message-ID: <at5ik2$tjl$1@c3po.schlund.de>
Hello!

Ed Symanzik  <···@msu.edu> wrote:
>[...]

>My two biggest obstacles right now are:
> - My vi background.

You're not alone. I'm using vim 6.x with :set lisp ai sm

Perhaps still not ilisp, but usable.

> - Lack of a simple project to start on.

Use your imagination *g*

Kind regards,

Hannah.
From: Pascal Bourguignon
Subject: Re: Why lisp is growing
Date: 
Message-ID: <874r99o9xq.fsf@thalassa.informatimago.com>
Ed Symanzik <···@msu.edu> writes:

> Daniel Barlow wrote:
> > 
> > The thread is discussing the reasons that Lisp is 'growing'.
> 
> What got me interested was seeing three slashdot articles in one 
> month related to Lisp.
> 
> Two articles by Paul Graham, "Beating the Averages" and "A Plan
> for Spam" - he's a great preacher.  Also, an article comparing the 
> performance of Lisp and Java showed up while I was investigating 
> Java.
> 
> Following up, I discovered a wealth of information online, including
> articles, code, tutorials, books(!), and free implementations.
> So far, I've been able to find information written to my level
> as my exploration expands.
> 
> My two biggest obstacles right now are:
>  - My vi background.

You're surely aware (but just in case) of Emacs and its numerous
vi-like modes, and that it speaks Lisp natively, aren't you?

>  - Lack of a simple project to start on.

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
There is a fault in reality. do not adjust your minds. -- Salman Rushdie
From: Hannah Schroeter
Subject: Re: Why lisp is growing
Date: 
Message-ID: <aub15d$e4$1@c3po.schlund.de>
Hello!

Pascal Bourguignon  <···@informatimago.com> wrote:
>[...]

>You're surely aware (but just in case) of Emacs and its numerous
>vi-like modes, and that it speaks Lisp natively, aren't you?

If you have used the real thing (i.e. some vi, especially vim),
you'll find the emacs based vi emulations lacking.

Kind regards,

Hannah.
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey3n0ndk8z1.fsf@cley.com>
* Daniel Barlow wrote:
> Tim Bradshaw <···@cley.com> writes:

> Who said anything about 'success'?  The thread is discussing the
> reasons that Lisp is 'growing'.

> 'Course, they haven't defined 'growing' either.  Maybe the GC is leaking.

Well, I guess, in the self-fulfilling sense that there are more
implementations than there used to be, this might mean something (but
then why quote all the other reasons it's `growing'?).  I assumed that
`growing' had some more useful definition, like `growing in number of
programmers' or `growing in number and significance of deployed
applications' or something.  But I guess making assumptions like that
is somewhat foolish, given the response I got...

--tim
From: Chris Gehlker
Subject: Re: Why lisp is growing
Date: 
Message-ID: <BA1B5898.24400%gehlker@fastq.com>
On 12/9/02 3:55 PM, in article ··············@bogomips.optonline.net, "Marc
Spitzer" <········@optonline.net> wrote:

> here we go
> 
> 1: 2 new lisp vendors, corman and scl
> 2: more messages on cll and new faces showing up
> 3: growing number of open source projects for cl
> 4: UFFI, get n FFI's for the price of 1
> 5: several lisp gui's are available
> 6: new user's groups springing up, ok at least 1.
> 7: it is pure fun to work with
> 
> I am sure I forgot some obvious stuff so please add
> it to the lisp, for fun and google.


Another way to characterize the same facts is to think of Lisp eras. In the
first era, Lisp was limited to being a research language at a few
universities. Dialects were largely incompatible. The next era saw the rise
of dedicated Lisp hardware. Lisp moved out of the universities into
corporations, but it was still not something that an individual could
reasonably hope to use. The third era could be called the workstation era.
Lisp was available for commodity hardware, but a typical Lisp development
environment still cost several times the going rate for a C/C++/Java/Pascal
setup. Finally we have the current, popular era, where Lisp is available free
or commercially at competitive prices for commodity hardware. I don't think
that the word is really out there, though. Think how different the world
might be if Borland had introduced TurboLisp instead of TurboPascal.



-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----==  Over 80,000 Newsgroups - 16 Different Servers! =-----
From: Dave Pearson
Subject: Re: Why lisp is growing
Date: 
Message-ID: <slrnavc4mf.pp8.davep.news@hagbard.davep.org>
* Chris Gehlker <·······@fastq.com>:

>[SNIP]                                     Think how different the world
> might be if Borland had introduced TurboLisp instead of TurboPascal.

I've got this vague recollection that Borland did sell a Lisp product at one
point in their early days. Am I getting confused?

-- 
Dave Pearson:                   |     lbdb.el - LBDB interface.
http://www.davep.org/           |  sawfish.el - Sawfish mode.
Emacs:                          |  uptimes.el - Record emacs uptimes.
http://www.davep.org/emacs/     | quickurl.el - Recall lists of URLs.
From: Chris Gehlker
Subject: Re: Why lisp is growing
Date: 
Message-ID: <BA1BA428.24431%gehlker@fastq.com>
On 12/10/02 9:14 AM, in article ·························@hagbard.davep.org,
"Dave Pearson" <··········@davep.org> wrote:

> * Chris Gehlker <·······@fastq.com>:
> 
>> [SNIP]                                     Think how different the world
>> might be if Borland had introduced TurboLisp instead of TurboPascal.
> 
> I've got this vague recollection that Borland did sell a Lisp product at one
> point in their early days. Am I getting confused?

A Google search turned up a reference to an AutoLisp IDE that they
apparently sold at one point.



From: Pascal Costanza
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DF614F6.7070806@web.de>
Dave Pearson wrote:
> * Chris Gehlker <·······@fastq.com>:
> 
> 
>>[SNIP]                                     Think how different the world
>>might be if Borland had introduced TurboLisp instead of TurboPascal.
> 
> 
> I've got this vague recollection that Borland did sell a Lisp product at one
> point in their early days. Am I getting confused?
> 

They definitely had a TurboProlog!

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Paolo Amoroso
Subject: Re: Why lisp is growing
Date: 
Message-ID: <9Or1PUEJ6SRosq6JSJqYEpK0xzcL@4ax.com>
On Mon, 09 Dec 2002 22:55:54 GMT, Marc Spitzer <········@optonline.net>
wrote:

> 1: 2 new lisp vendors, corman and scl

The grand total of currently in business _commercial_ vendors of Common
Lisp systems is 9. There are also 6 currently maintained open-source Common
Lisp implementations.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
From: Bruce Hoult
Subject: Re: Why lisp is growing
Date: 
Message-ID: <bruce-07D2CF.21164811122002@copper.ipg.tsnz.net>
In article <····························@4ax.com>,
 Paolo Amoroso <·······@mclink.it> wrote:

> On Mon, 09 Dec 2002 22:55:54 GMT, Marc Spitzer <········@optonline.net>
> wrote:
> 
> > 1: 2 new lisp vendors, corman and scl
> 
> The grand total of currently in business _commercial_ vendors of Common
> Lisp systems is 9. There are also 6 currently maintained open-source Common
> Lisp implementations.

I've got to say that's a lot.

How many C++ compilers are there these days?  Fewer than nine, perhaps.

-- Bruce
From: Hannah Schroeter
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atad5a$qah$2@c3po.schlund.de>
Hello!

Bruce Hoult  <·····@hoult.org> wrote:
>[...]

>> The grand total of currently in business _commercial_ vendors of Common
>> Lisp systems is 9. There are also 6 currently maintained open-source Common
>> Lisp implementations.

>I've got to say that's a lot.

That's right.

>How many C++ compilers are there these days?  Fewer than nine, perhaps.

More, I guess. The free ones may well be fewer than in the CL case
(gcc, tcc come to mind), but the commercial ones are many: at least
one per CPU architecture (the CPU vendor's own), plus
non-CPU-vendor versions (Microsoft, Borland [or whatever their name
is today], etc.).

Kind regards,

Hannah.
From: Steven E. Harris
Subject: Re: Why lisp is growing
Date: 
Message-ID: <m365ttczxk.fsf@LUX.speakeasy.org>
······@schlund.de (Hannah Schroeter) writes:

> the commercial ones are many

Many of those commercial C++ compilers rely upon the excellent EDG
front-end, and are hence nearly interchangeable from the "swallow the
source" perspective. This reliance upon EDG is good for programmers
who desire a standard-compliant compiler, but it decreases the number
of would-be /separate/ compiler front-ends out there.

-- 
Steven E. Harris        :: ········@raytheon.com
Raytheon                :: http://www.raytheon.com
From: Erann Gat
Subject: Re: Why lisp is growing
Date: 
Message-ID: <gat-1612021537210001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@LUX.speakeasy.org>, Steven E. Harris
<···@speakeasy.org> wrote:

> ······@schlund.de (Hannah Schroeter) writes:
> 
> > the commercial ones are many
> 
> Many of those commercial C++ compilers rely upon the excellent EDG
> front-end, and are hence nearly interchangeable from the "swallow the
> source" perspective. This reliance upon EDG is good for programmers
> who desire a standard-compliant compiler, but it decreases the number
> of would-be /separate/ compiler front-ends out there.

That's not quite true, since a licensee of the EDG front end can modify
it, so each separate licensee is potentially a separate contribution to
the C++ compiler front-end gene pool.

Still, 345 kloc for a parser?  Holy shit!  (And why in the world would a
front end have host and target dependencies?)

E.
From: Hannah Schroeter
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atnjss$lps$1@c3po.schlund.de>
Hello!

Erann Gat <···@jpl.nasa.gov> wrote:
>[...]

>Still, 345 kloc for a parser?  Holy shit!

C++ syntax and semantics *are* complicated.

>(And why in the world would a
>front end have host and target dependencies?)

If you want to build up an IR which is already a little host/target
dependent...

E.g. constant literals (you have to do constant arithmetics using
the host resources, but target semantics!).

>E.

Kind regards,

Hannah.
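Hannah's constant-folding point can be made concrete with a minimal sketch
(Python, with illustrative names; not taken from any actual front-end): the
addition runs on the wide-integer host, but the result is reduced to the
semantics of a hypothetical 16-bit target int.

```python
# Hypothetical sketch of constant folding in a cross-compiler front-end.
# The arithmetic happens on the host, but the result must be wrapped to
# the *target's* integer width and reinterpreted as the target's two's
# complement -- one reason a front-end ends up with target dependencies.

def fold_add(a, b, target_int_bits=16):
    """Constant-fold a + b with the target's wrapping semantics."""
    mask = (1 << target_int_bits) - 1
    r = (a + b) & mask                  # host does the math...
    if r >= 1 << (target_int_bits - 1):
        r -= 1 << target_int_bits       # ...reinterpreted for the target
    return r

# On a host with wide integers, 32767 + 1 is 32768; folded for a
# 16-bit target int it wraps to -32768.
print(fold_add(32767, 1))   # -32768
print(fold_add(100, 23))    # 123
```

The same issue arises for floating-point literals, character encodings, and
struct layout, which is why "just the parser" carries host/target knowledge.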
From: Oleg
Subject: Why is CLL growing (was: Why lisp is growing)
Date: 
Message-ID: <at6cql$733$1@newsmaster.cc.columbia.edu>
Hi

In case people are interested in comparing USENET traffic of various 
comp.lang groups, there is a site that does just that. It appears that CLL 
traffic increased by a factor of 2 in a year. The traffic is measured by 
doing Google Groups queries from a Perl script. I'm not sure how reliable 
the data is, since Google frequently lies about the number of matches 
unless all results fit on one page. So if this can be worked around, and if 
cross-posted articles and spam are excluded, it would be interesting to 
look at the results. Here is the link: 
http://www.chez.com/prigaux/language-study/usenet-traffic-ranking/

Another interesting thing to look at would be the percentage of articles 
containing insulting words in various groups.

Oleg
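The counting idea above can be sketched without trusting Google's match
counts, under one assumption: you have a local mbox archive of the group
(the file path and function names below are hypothetical; Python sketch).

```python
# Count posts per year from a local mbox archive and compute a
# year-over-year growth factor.  This sidesteps the unreliable match
# counts the post above complains about.

import mailbox
from collections import Counter
from email.utils import parsedate_tz

def posts_per_year(mbox_path):
    """Tally messages in an mbox file by the year in their Date header."""
    counts = Counter()
    for msg in mailbox.mbox(mbox_path):
        parsed = parsedate_tz(msg.get("Date", ""))
        if parsed:
            counts[parsed[0]] += 1    # parsed[0] is the year
    return counts

def growth_factor(counts, year):
    """Ratio of this year's traffic to the previous year's."""
    prev = counts.get(year - 1, 0)
    return counts[year] / prev if prev else float("inf")

# e.g. counts = posts_per_year("cll-archive.mbox")
#      growth_factor(counts, 2002)  -> ~2.0 if traffic doubled
```

Filtering cross-posts and spam (by Newsgroups header and sender heuristics)
would still be needed before trusting the numbers.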
From: Kenny Tilton
Subject: Re: Why is CLL growing (was: Why lisp is growing)
Date: 
Message-ID: <3DF6CBC5.40907@nyc.rr.com>
Oleg wrote:
> Another interesting thing to look at would be the percentage of articles 
> containing insulting words in various groups.

That's a fuckin' great idea!

:)

-- 

  kenny tilton
  clinisys, inc
  ---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor,
   and I'm happy to state I finally won out over it."
                                                   Elwood P. Dowd
From: marcel haesok
Subject: Re: Why lisp is growing
Date: 
Message-ID: <lWyJ9.313575$QZ.46331@sccrnsc02>
"Marc Spitzer" <········@optonline.net> wrote in message
···················@bogomips.optonline.net...
> here we go
>
> 1: 2 new lisp vendors, corman and scl
> 2: more messages on cll and new faces showing up
> 3: growing number of open source projects for cl
> 4: UFFI, get n FFI's for the price of 1
> 5: several lisp gui's are available
> 6: new user's groups springing up, ok at least 1.
> 7: it is pure fun to work with
>
> I am sure I forgot some obvious stuff so please add
> it to the lisp, for fun and google.
>


I would add an item:
 8. CPUs have become so powerful that the market is
      becoming ready for commercially viable AI products.
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <at9mvr$bqa$1@newsmaster.cc.columbia.edu>
On the subject of why the popularity of Lisp is growing (if that's the 
case): perhaps this is because Lisp got a lot better than it was. My 
understanding is that Lisp used to be quite bad, and it made some 
improvements: lexical scoping instead of dynamic helped make it somewhat 
less error-prone, and sophisticated GC and compiled execution helped make 
some Lisp implementations reasonably efficient.

Oleg
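The lexical-vs-dynamic point can be illustrated with a toy sketch (Python
rather than Lisp; the dynamic-scope emulation below is purely illustrative):
a lexical closure keeps the binding from its definition site, while a
dynamically scoped lookup is hijacked by whatever the most recent caller bound.

```python
# Toy contrast: lexical closures capture the binding visible where the
# function is *defined*; dynamic scope resolves the name against the
# innermost binding at *call* time, which is what made old dynamically
# scoped Lisps error-prone.

dynamic_env = []          # a stack of bindings simulating dynamic scope

def make_adder_lexical(n):
    return lambda x: x + n          # n is fixed at definition time

def adder_dynamic(x):
    # look n up in the innermost dynamic binding at call time
    for frame in reversed(dynamic_env):
        if "n" in frame:
            return x + frame["n"]
    raise NameError("n is unbound")

add5 = make_adder_lexical(5)
dynamic_env.append({"n": 5})
dynamic_env.append({"n": 100})      # an unrelated caller rebinds n...

print(add5(1))            # 6   -- lexical: unaffected by callers
print(adder_dynamic(1))   # 101 -- dynamic: captured by the rebinding
```

Common Lisp kept dynamic binding available via special variables, but made
lexical scope the default, which is the improvement the post refers to.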
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey3y96vfsym.fsf@cley.com>
* oleg inconnu wrote:
> On the subject of why the popularity of Lisp is growing (if that's the 
> case): perhaps this is because Lisp got a lot better than it was. My 
> understanding is that Lisp used to be quite bad, and it made some 
> improvements: lexical scoping instead of dynamic helped make it somewhat 
> less error-prone, and sophisticated GC and compiled execution helped make 
> some Lisp implementations reasonably efficient.

But these things happened at least 20 years ago...

--tim
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <at9qli$e5b$1@newsmaster.cc.columbia.edu>
Tim Bradshaw wrote:

> * oleg inconnu wrote:
>> On the subject of why the popularity of Lisp is growing (if that's the
>> case): perhaps this is because Lisp got a lot better than it was. My
>> understanding is that Lisp used to be quite bad, and it made some
>> improvements: lexical scoping instead of dynamic helped make it somewhat
>> less error-prone, and sophisticated GC and compiled execution helped make
>> some Lisp implementations reasonably efficient.
> 
> But these things happened at least 20 years ago...
> 

I'll quote Graham 1996: "Until RECENTLY, most Lisp implementations have had 
bad garbage collectors". 

Oleg
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey3u1hjfk8p.fsf@cley.com>
* oleg inconnu wrote:

> I'll quote Graham 1996: "Until RECENTLY, most Lisp implementations have had 
> bad garbage collectors". 

I think his definition of `recently' is pretty loose, and also `most'
doesn't really count: it should only take one good implementation.
I'm unclear when stock-hardware lisps got good GCs, but I suspect that
LispM lisps had good ones in about 1985 (so, OK that's 17 years, sorry
I exaggerated).

--tim
From: Joe Marshall
Subject: Re: Why lisp is growing
Date: 
Message-ID: <smx3l3y9.fsf@ccs.neu.edu>
Tim Bradshaw <···@cley.com> writes:

> * oleg inconnu wrote:
> 
> > I'll quote Graham 1996: "Until RECENTLY, most Lisp implementations have had 
> > bad garbage collectors". 
> 
> I think his definition of `recently' is pretty loose, and also `most'
> doesn't really count: it should only take one good implementation.
> I'm unclear when stock-hardware lisps got good GCs, but I suspect that
> LispM lisps had good ones in about 1985 (so, OK that's 17 years, sorry
> I exaggerated).

The LMI Lambda had generational GC circa 1985 - 1987 (it existed in
1985, but it took a while to find all the places in the microcode
that were a bit cavalier with pointers).  Symbolics had it a couple of
years earlier.  Patrick Sobalvarro did Lucid's around 1987-1988.
From: Duane Rettig
Subject: Re: Why lisp is growing
Date: 
Message-ID: <4u1hjp5y7.fsf@beta.franz.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Tim Bradshaw <···@cley.com> writes:
> 
> > * oleg inconnu wrote:
> > 
> > > I'll quote Graham 1996: "Until RECENTLY, most Lisp implementations have had 
> > > bad garbage collectors". 
> > 
> > I think his definition of `recently' is pretty loose, and also `most'
> > doesn't really count: it should only take one good implementation.
> > I'm unclear when stock-hardware lisps got good GCs, but I suspect that
> > LispM lisps had good ones in about 1985 (so, OK that's 17 years, sorry
> > I exaggerated).
> 
> The LMI Lambda had generational GC circa 1985 - 1987 (it existed in
> 1985, but it took a while to get find all the places in the microcode
> that were a bit cavalier with pointers).  Symbolics had it a couple of
> years earlier.  Patrick Sobalvarro did Lucid's around 1987-1988.

Allegro CL's generational gc was first committed to source file management
on 9/25/87.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Tim Daly, Jr.
Subject: Re: Why lisp is growing
Date: 
Message-ID: <m3smx1s7iv.fsf@www.tenkan.org>
Duane Rettig <·····@franz.com> writes:

> Allegro CL's generational gc was first committed to source file management
> on 9/25/87.
>

Does that mean that you've been able to preserve your revision history
in some easily accessible form for ~15 years?  Or did that involve
some archaeology?  In case there really is a revision control system
that's met your needs for over a decade, could you perhaps tell me
what it is? :)

-Tim
From: Duane Rettig
Subject: Re: Why lisp is growing
Date: 
Message-ID: <465txb61t.fsf@beta.franz.com>
···@tenkan.org (Tim Daly, Jr.) writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Allegro CL's generational gc was first committed to source file management
> > on 9/25/87.
> >
> 
> Does that mean that you've been able to preserve your revision history
> in some easily accessible form for ~15 years?  Or did that involve
> some archaeology?  In case there really is a revision control system
> that's met your needs for over a decade, could you perhaps tell me
> what it is? :)

We had started in RCS, and that was (for the most part) upward compatible
with CVS.  I don't remember when we started using CVS, though.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Tim Bradshaw
Subject: Re: Why lisp is growing
Date: 
Message-ID: <ey365tw8u5d.fsf@cley.com>
* Tim Daly, wrote:
> Does that mean that you've been able to preserve your revision history
> in some easily accessible form for ~15 years?  Or did that involve
> some archaeology?  In case there really is a revision control system
> that's met your needs for over a decade, could you perhaps tell me
> what it is? :)

Why is this weird?  I have things that went into RCS in 1990 and are
now in CVS, with their histories intact.  A revision control system
that won't last a decade or more isn't a very good system.

--tim
From: Tim Daly, Jr.
Subject: Re: Why lisp is growing
Date: 
Message-ID: <wkfzsy56ex.fsf@tenkan.org>
Tim Bradshaw <···@cley.com> writes:

> * Tim Daly, wrote:
> > Does that mean that you've been able to preserve your revision history
> > in some easily accessible form for ~15 years?  
...
> Why is this weird?  I have things that went into RCS in 1990 and are
> now in CVS, with their histories intact.  A revision control system
> that won't last a decade or more isn't a very good system.

Hehe..  Yes, well, this _is_ a Lisp newsgroup.  As you might recall,
the rest of the world has a certain self-destructive preference for
broken technology with little pictures glued to it.  I should've
known that the answer would be RCS/CVS.  Just like Lisp, it's out
there, everybody kind of knows about it, but it's all too often
written off as archaic.

Thanks for the feedback, Tim and Duane.

-Tim
From: Lars Magne Ingebrigtsen
Subject: Re: Why lisp is growing
Date: 
Message-ID: <m3bs3mgdw8.fsf@quimbies.gnus.org>
···@tenkan.org (Tim Daly, Jr.) writes:

> I should've known that the answer would be RCS/CVS.  Just like Lisp,
> it's out there, everybody kind of knows about it, but it's all too
> often written off as archaic.

My guess is that the vast bulk of Unixish development is under CVS.
Certainly it's the major player when it comes to free software
development.  In that sense, it's not like Lisp at all.

-- 
(domestic pets only, the antidote for overdose, milk.)
   ·····@gnus.org * Lars Magne Ingebrigtsen
From: Tim Daly, Jr.
Subject: Re: Why lisp is growing
Date: 
Message-ID: <wk3cox6dds.fsf@tenkan.org>
Lars Magne Ingebrigtsen <·····@gnus.org> writes:

> ···@tenkan.org (Tim Daly, Jr.) writes:
> 
> > I should've known that the answer would be RCS/CVS.  Just like Lisp,
> > it's out there, everybody kind of knows about it, but it's all too
> > often written off as archaic.
> 
> My guess is that the vast bulk of Unixish development is under CVS.
> Certainly it's the major player when it comes to free software
> development.  In that sense, it's not like Lisp at all.

I think that you're trying to say that CVS is popular, and Lisp is
not, hence they are dissimilar.  Am I right?

That may or may not be true.  To clarify, I wanted only to say that,
in my (admittedly brief) career, I've often had to code in gimpy
language X, when I know Lisp would be better, and likewise check my
code into gimpy revision control system Y, when I know CVS would be
better.  I figured it might be a common experience.

Beyond that, I don't happen to think that CVS and Lisp have much in
common either.

-Tim
From: Paul Dietz
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DFE3073.74D0D42E@motorola.com>
"Tim Daly, Jr." wrote:

> Beyond that, I don't happen to think that CVS and Lisp have much in
> common either.


PRCS ( http://prcs.sourceforge.net/ ) seems lispier to me.

	Paul
From: Tim Daly, Jr.
Subject: Re: Why lisp is growing
Date: 
Message-ID: <wkvg1t8zmn.fsf@tenkan.org>
Paul Dietz <············@motorola.com> writes:

> "Tim Daly, Jr." wrote:
> 
> > Beyond that, I don't happen to think that CVS and Lisp have much in
> > common either.
> 
> 
> PRCS ( http://prcs.sourceforge.net/ ) seems lispier to me.
> 
> 	Paul


Why do you say that?  I've not worked with it, but I read some of the
comparisons to CVS on their site.  It sounds basically the same, minus
some features.

-Tim
From: Paul Dietz
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DFE457C.9E68AC1E@motorola.com>
"Tim Daly, Jr." wrote:

> > PRCS ( http://prcs.sourceforge.net/ ) seems lispier to me.

> Why do you say that?  I've not worked with it, but I read some of the
> comparisons to CVS on their site.  It sounds basically the same, minus
> some features.

For the superficial reason that it uses S-expressions.

	Paul
From: Christopher C. Stacy
Subject: Re: Why lisp is growing
Date: 
Message-ID: <uel8ngok7.fsf@dtpq.com>
>>>>> On 12 Dec 2002 09:14:22 -0500, Joe Marshall ("Joe") writes:

 Joe> Tim Bradshaw <···@cley.com> writes:
 >> * oleg inconnu wrote:
 >> 
 >> > I'll quote Graham 1996: "Until RECENTLY, most Lisp implementations have had 
 >> > bad garbage collectors". 
 >> 
 >> I think his definition of `recently' is pretty loose, and also `most'
 >> doesn't really count: it should only take one good implementation.
 >> I'm unclear when stock-hardware lisps got good GCs, but I suspect that
 >> LispM lisps had good ones in about 1985 (so, OK that's 17 years, sorry
 >> I exaggerated).

 Joe> The LMI Lambda had generational GC circa 1985 - 1987 (it existed in
 Joe> 1985, but it took a while to get find all the places in the microcode
 Joe> that were a bit cavalier with pointers).  Symbolics had it a couple of
 Joe> years earlier.  Patrick Sobalvarro did Lucid's around 1987-1988.

In 1980, we used Lisp Machines running for weeks at a time.
The garbage collector was "good enough" for that.
From: Tom Lord
Subject: Re: Why lisp is growing
Date: 
Message-ID: <uvgqtv3li97i57@corp.supernews.com>
	>> On the subject of why the popularity of Lisp is growing (if
	>> that's the case): perhaps this is because Lisp got a lot
	>> better than it was. My understanding is that Lisp used to
	>> be quite bad, and it made some improvements: lexical
	>> scoping instead of dynamic helped make it somewhat less
	>> error-prone, and sophisticated GC and compiled execution
	>> helped make some Lisp implementations reasonably efficient.

	> But these things happened at least 20 years ago...

Western capitalism has been highly oppressive, highly corrupt, and
highly irrational about engineering for longer than that.  Good
lisp-based engineering optimizes rationally, but not in ways commonly
favored by CFOs.  Those factors have kept lisp (including scheme)
down.

Some notable recent commercial successes of lisp have been in areas
that were sufficiently complicated that the probability that only a
lisp hacker could achieve results was high.

Lisp may be gaining a little, but "engineering by ignorant management
platitudes" and "engineering by quarterly results" is still (all too)
popular.

Good lisp hackers: go get your MBA, maybe?

-t
From: Kaz Kylheku
Subject: Re: Why lisp is growing
Date: 
Message-ID: <cf333042.0212121326.2122bf50@posting.google.com>
····@emf.emf.net (Tom Lord) wrote in message news:<··············@corp.supernews.com>...
> >> On the subject of why the popularity of Lisp is growing (if
> 	>> that's the case): perhaps this is because Lisp got a lot
> 	>> better than it was. My understanding is that Lisp used to
> 	>> be quite bad, and it made some improvements: lexical
> 	>> scoping instead of dynamic helped make it somewhat less
> 	>> error-prone, and sophisticated GC and compiled execution
> 	>> helped make some Lisp implementations reasonably efficient.
>  
> 	> But these things happened at least 20 years ago...
> 
> Western capitalism has been highly oppressive, highly corrupt, and
> highly irrational about engineering for longer than that.  Good
> lisp-based engineering optimizes rationally, but not in ways commonly
> favored by CFOs.  Those factors have kept lisp (including scheme)
> down.

Nonsense. If you are optimizing in a way that does not jibe with
capitalism, then you aren't doing it rationally. Ultimately,
programming decisions, in a commercial setting, are traceable to
business cases. Every competent programmer, Lisp or otherwise,
understands this.

> Some notable recent commercial successes of lisp have been in areas
> that were sufficiently complicated that the probability that only a
> lisp hacker could achieve results was high.
> 
> Lisp may be gaining a little, but "engineering by ignorant management
> platitudes" and "engineering by quarterly results" is still (all too)
> popular.

Stupid management is not the same thing as capitalism. It's just
stupidity in a capitalist setting. Stupid management wastes
opportunities and wastes profit in ways that the smart CFO would
certainly not approve of, if they had greater visibility to him or
her.

The problem with stupid management is that it's carried out by idiots
who are too concerned with politics and personalities, rather than
getting the job done as fast and as best as possible. It's a hidden
form of socialism that rots the underbelly of capitalism. People must
``be a team player'' and ``feel good'' and all that collectivist crap.

The manager's true job is to allocate resources and remove obstacles
(which includes people) that stand in the way of the most productive
individuals, not to create useless, time-wasting protocols and
rituals.

It's collectivism that keeps bad programming tools in use! We wouldn't
want the idiots to feel bad or learn anything new, so let's stick with
the dumb, common-denominator crap that everyone already understands.
The goal is not to produce, but to live harmoniously with your fellow
man in the hours 9 to 5, whose feelings would be hurt if he were
outperformed by someone using a different programming language.
Moreover, the workers aren't individuals. They are just
interchangeable warm bodies. One programmer in language X is just as
good as another programmer of language X; every human is the same.
What is all this, if not a kind of micro-socialism within a
corporation?

But, ironically, when we actually make a product and market it, of
course we will insist that our product is different, and better than
everything else, demanding that customers apply a better reasoning
when selecting our product than what we did when we selected our
tools.
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atb0qn$be6$1@newsmaster.cc.columbia.edu>
Kaz Kylheku wrote:

> The problem with stupid management is that it's carried out by idiots
> who are too concerned with politics and personalities, rather than
> getting the job done as fast and as best as possible. It's a hidden
> form of socialism that rots the underbelly of capitalism. People must
> ``be a team player'' and ``feel good'' and all that collectivist crap.

This is quite off-topic, but I must say I couldn't agree more. Socialism is 
like capitalism when the whole country is one big [inefficient] 
corporation.

Oleg
From: Tom Lord
Subject: Re: Why lisp is growing
Date: 
Message-ID: <uvi7kab7d7d71e@corp.supernews.com>
	> Western capitalism has been highly oppressive, highly
	> corrupt, and highly irrational about engineering for longer
	> than that.  Good lisp-based engineering optimizes
	> rationally, but not in ways commonly favored by CFOs.  Those
	> factors have kept lisp (including scheme) down.

	Nonsense. If you are optimizing in a way that does not jive
	with capitalism,

You misunderstand.  Capitalism itself isn't the problem.  Capitalism
as it has been operated around here is.

	Stupid management is not the same thing as capitalism. 

I can see how things got confused.

By "Western capitalism" I meant our particular capitalist
institutions -- not the abstract economic strategy of capitalism.

Better (new and/or improved) capitalist institutions will do a better
job, and that probably includes applying lisp and lisp talent in more
situations.

If it were capitalism itself I was criticizing, I probably wouldn't 
have suggested getting MBAs (though maybe you thought I was advocating
a take-down from the inside :-).

People on usenet are often really touchy about perceived or actual
criticisms of capitalism and very reflexive about their replies.
That's sad.  It's intellectually stifling.

-t
From: Paolo Amoroso
Subject: Re: Why lisp is growing
Date: 
Message-ID: <JJv4Pc2EeN9QrdmK9TnYZJSdDYK8@4ax.com>
On Thu, 12 Dec 2002 10:58:07 -0000, ····@emf.emf.net (Tom Lord) wrote:

> Good lisp hackers: go get your MBA, maybe?

Can't do it now, too busy with my MFA :)


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
From: Rand Sobriquet
Subject: Re: Why lisp is growing
Date: 
Message-ID: <1e249696.0212130343.416be846@posting.google.com>
> On the subject of why the popularity of Lisp is growing (if that's the 
> case): perhaps this is because Lisp got a lot better than it was. My 
> understanding is that Lisp used to be quite bad, and it made some 
> improvements: lexical scoping instead of dynamic helped make it somewhat 
> less error-prone, and sophisticated GC and compiled execution helped make 
> some Lisp implementations reasonably efficient.
> 
> Oleg

Oleg,

May I ask what language(s) you use for development?  From your message
it seems that you are comparing (implicitly) CL to a language that you
are currently using.

I am working on a commercial project whose implementation can be
nicely described with CLOS.  Later, however, I am interested in doing
a small, numerically intensive project: I would not mind learning a new
language if there are advantages in speed.

Thanks,
Rand
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atdafq$m2l$1@newsmaster.cc.columbia.edu>
Rand Sobriquet wrote:

>> On the subject of why the popularity of Lisp is growing (if that's the
>> case): perhaps this is because Lisp got a lot better than it was. My
>> understanding is that Lisp used to be quite bad, and it made some
>> improvements: lexical scoping instead of dynamic helped make it somewhat
>> less error-prone, and sophisticated GC and compiled execution helped make
>> some Lisp implementations reasonably efficient.
>> 
>> Oleg
> 
> Oleg,
> 
> May I ask what language(s) you use for development?  

Knowing full well how intolerant some CLL regulars are of any language that 
is not Lisp, I must say "you may not" to avoid another language war of the 
week. Besides, what language a man uses for development is his own 
business. It's in the Constitution.

Oleg

> From your message
> it seems that you are comparing (implicitly) CL to a language that you
> are currently using.
> 
> I am working on a commercial project whose implementation can be
> nicely described with CLOS.  Later, however, I am interested in doing
> a small, numerical-extensive project: I would not mind learning a new
> language if there's some advantages in speed.
> 
> Thanks,
> Rand
From: Joe Marshall
Subject: Re: Why lisp is growing
Date: 
Message-ID: <pts5abk7.fsf@ccs.neu.edu>
Oleg <············@myrealbox.com> writes:

> Knowing full well how intolerant some CLL regulars are to any language that 
> is not Lisp, I must say "you may not" to avoid another language war of the 
> week. Besides, what language a man uses for development is his own 
> business. It's in the Consitution.

Show a little spine!

There are many very good reasons not to use CL for development.  Not
all of them are technical, but some are.  You may even use another
language because you prefer to.

Of course if you find a different language to be universally better
than Lisp, why would you hang out here?
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atdk1r$sr9$1@newsmaster.cc.columbia.edu>
Joe Marshall wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> Knowing full well how intolerant some CLL regulars are to any language
>> that is not Lisp, I must say "you may not" to avoid another language war
>> of the week. Besides, what language a man uses for development is his own
>> business. It's in the Consitution.
> 
> Show a little spine!

s/spine/naivete
 
> There are many very good reasons not to use CL for development.  Not
> all of them are technical, but some are.  You may even use another
> language because you prefer to.
> 
> Of course if you find a different language to be universally better
> than Lisp, why would you hang out here?
From: Marc Spitzer
Subject: Re: Why lisp is growing
Date: 
Message-ID: <861y4l35hc.fsf@bogomips.optonline.net>
Oleg <············@myrealbox.com> writes:

> Joe Marshall wrote:
> 
> > Oleg <············@myrealbox.com> writes:
> > 
> >> Knowing full well how intolerant some CLL regulars are to any language
> >> that is not Lisp, I must say "you may not" to avoid another language war
> >> of the week. Besides, what language a man uses for development is his own
> >> business. It's in the Consitution.
> > 
> > Show a little spine!
> 
> s/spine/naivete

no 
spine == spine 
naivete == honest ignorance.  It's "cute" because it does not last long,
and we can all remember with fondness when we were starting out and
made the same kinds of mistakes; when you correct it, it goes away.
When it does not go away after repeated prompts, they are moved into the
stupid and/or asshole category.

marc
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atdom2$2nb$1@newsmaster.cc.columbia.edu>
Marc Spitzer wrote:

> Oleg <············@myrealbox.com> writes:
> 
>> Joe Marshall wrote:
>> 
>> > Oleg <············@myrealbox.com> writes:
>> > 
>> >> Knowing full well how intolerant some CLL regulars are to any language
>> >> that is not Lisp, I must say "you may not" to avoid another language
>> >> war of the week. Besides, what language a man uses for development is
>> >> his own business. It's in the Consitution.
>> > 
>> > Show a little spine!
>> 
>> s/spine/naivete
> 
> no
> spine == spine
> naivete == honest ignorance.  Its "cute" because it does not last long
> and we all can remember with fondness when we were starting out and
> made the same kinds of mistakes, when you correct it it goes away.
> When it does not go away after repeated prompts they are moved into the
> stupid and/or asshole catagory(if they do not go away).
> 

I know what "naivete" means, but thank you anyway.

Oleg
From: Marc Spitzer
Subject: Re: Why lisp is growing
Date: 
Message-ID: <8665tx3co5.fsf@bogomips.optonline.net>
Oleg <············@myrealbox.com> writes:

> Rand Sobriquet wrote:
 
> > May I ask what language(s) you use for development?  
> 
> Knowing full well how intolerant some CLL regulars are to any language that 
> is not Lisp, I must say "you may not" to avoid another language war of the 
> week. Besides, what language a man uses for development is his own 
> business. It's in the Consitution.
> 
> Oleg

I have never seen anyone flamed for saying this is what I use to pay
the rent.  I have seen people ask why CL was not used, but that is a
fair question here.

I have seen people get flamed for saying lisp sucks because it is not
like what I already know and you should fix that.  And for a comment
like that they deserve it, when voiced in CLL.

marc
From: Oleg
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atdf29$p11$1@newsmaster.cc.columbia.edu>
Marc Spitzer wrote:

> 
> I have never seen anyone flamed for saying this is what I use to pay
> the rent.  I have seen people ask why CL was not used, but that is a
> fair question here.

[...]

I'm in grad school. I use American taxpayers' money to pay the rent :)

Oleg
From: Steve
Subject: Re: Why lisp is growing
Date: 
Message-ID: <6f8cb8c9.0212170929.6dcf62da@posting.google.com>
Marc Spitzer <········@optonline.net> wrote in message news:<··············@bogomips.optonline.net>...
> here we go
> 
> 1: 2 new lisp vendors, corman and scl
> 2: more messages on cll and new faces showing up
> 3: growing number of open source projects for cl
> 4: UFFI, get n FFI's for the price of 1
> 5: several lisp gui's are available
> 6: new user's groups springing up, ok at least 1.
> 7: it is pure fun to work with
> 
> I am sure I forgot some obvious stuff so please add
> it to the lisp, for fun and google.
> 
> marc

I would look for these two indicators to forecast a lisp comeback:

1. Typing "lisp" as a keyword into the search engine at your favorite
tech job site and getting a healthy number of results back.

2. Any tech-savvy person being able to list a number of popular
applications written in Lisp, with the people listening already
familiar with the names of those apps.

Anything else is just the positive thinking of the faithful.

Steve
From: Pascal Costanza
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DFF65DD.3060503@web.de>
Steve wrote:
> Marc Spitzer <········@optonline.net> wrote in message news:<··············@bogomips.optonline.net>...
> 
>>here we go
>>
>>1: 2 new lisp vendors, corman and scl
>>2: more messages on cll and new faces showing up
>>3: growing number of open source projects for cl
>>4: UFFI, get n FFI's for the price of 1
>>5: several lisp gui's are available
>>6: new user's groups springing up, ok at least 1.
>>7: it is pure fun to work with
>>
>>I am sure I forgot some obvious stuff so please add
>>it to the lisp, for fun and google.
>>
>>marc
> 
> 
> I would look for these two indicators to forecast a lisp comeback:
> 
> 1. Typing "lisp" in as a keyword in the search engine at your favorite
> tech job site and having a healthy number of results returned to you.
> 
> 2. Having any tech-savvy person being able to list a number of popular
> applications written in lisp and having the people listening being
> already familiar with the names of those apps.

These indicators would in fact tell you that Lisp is already popular 
again. What's the point in forecasting something that has already happened?

> Anything else is just the positive thinking of the faithful

"If you are really interested in change, then optimism is the best 
strategy."
- Joel Kramer and Diana Alstad


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kenny Tilton
Subject: Re: Why lisp is growing
Date: 
Message-ID: <3DFF7553.6090209@nyc.rr.com>
Steve wrote:
> Marc Spitzer <········@optonline.net> wrote in message news:<··············@bogomips.optonline.net>...
> 
>>here we go
>>
>>1: 2 new lisp vendors, corman and scl
>>2: more messages on cll and new faces showing up
>>3: growing number of open source projects for cl
>>4: UFFI, get n FFI's for the price of 1
>>5: several lisp gui's are available
>>6: new user's groups springing up, ok at least 1.
>>7: it is pure fun to work with
>>
>>I am sure I forgot some obvious stuff so please add
>>it to the lisp, for fun and google.
>>
>>marc
> 
> 
> I would look for these two indicators to forecast a lisp comeback:
> 
> 1. Typing "lisp" in as a keyword in the search engine at your favorite
> tech job site and having a healthy number of results returned to you.

i do not think "forecast" means what you think it means. :) that's /is/ 
the comeback.

> 
> 2. Having any tech-savvy person being able to list a number of popular
> applications written in lisp and having the people listening being
> already familiar with the names of those apps.
> 
> Anything else is just the positive thinking of the faithful

well, yes, exactly, based on positive signs we are discussing. i think 
you missed the spirit of the discussion, and are starting a new thread, 
viz., How we'll know lisp is kicking ass and taking numbers.

10. O'Reilly publishes a Lisp book.

9. Someone writes Comp Sci 101 based on Lisp

8. Graham abandons Arc

7. Larry Wall switches to Lisp

6. Norvig does the 3rd edition of PAIP in Lisp

5. Guido switches to Lisp

4. XML is dumped in favor of sexprs

3. Grace Hopper switches to Lisp

2. MS does Visual Lisp

1. NASA puts /Eran/ in space and....

... Erik makes the cover of Wired.

:)



-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Cells let us walk, talk, think, make love and realize
  the bath water is cold." -- Lorraine Lee Cudmore
From: Michael Schuerig
Subject: Re: Why lisp is growing
Date: 
Message-ID: <atnuti$362$07$1@news.t-online.com>
Kenny Tilton wrote:

> 6. Norvig does the 3rd edition of PAIP in Lisp

There isn't a 2nd edition. You're thinking of AIMA (2nd edition due 
every day now)?

Michael

-- 
Michael Schuerig                  If at first you don't succeed...
···············@acm.org           try, try again.
http://www.schuerig.de/michael/   --Jerome Morrow, "Gattaca"
From: Paolo Amoroso
Subject: Re: Why lisp is growing
Date: 
Message-ID: <SXYAPpDQd=DUjGJ1pAHIKwzW5Z1n@4ax.com>
On Tue, 17 Dec 2002 19:02:14 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:

> 4. XML is dumped in favor of sexprs

Make sexp not war!


> 3. Grace Hopper switches to Lisp

R.I.P. Maybe her ghost.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
From: Richard Fateman
Subject: Re: Why lisp is growing / pay for code?
Date: 
Message-ID: <3E00B88C.4090300@cs.berkeley.edu>
visit www.rentacoder.com    or www.myskool.com and you
will be disappointed.  Each offers help for cheaters
(at a price) to program up something, perhaps for
homework.  (Or more legit purposes perhaps....)

But they don't offer help on lisp.

Maybe some underemployed soul should register and
see if he/she gets any bids.  That would be evidence
that lisp is growing.

Also one could then send the people who ask here for
the solution to a homework problem off to a site
where the solution could be sold to them instead of
blurting it out free / inundating them with
subtle stuff to confuse them and their instructors.


RJF
From: Oleg
Subject: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <atql2v$5v1$1@newsmaster.cc.columbia.edu>
Paolo Amoroso wrote:

> 
>> 4. XML is dumped in favor of sexprs
> 
> Make sexp not war!

\documentclass[letterpaper]{article}
\bibliographystyle{unsrt}

\begin{document}
\begin{S_exp_war}
Lispers like to flame XML. Now, what is wrong with XML that
is {\em not} "wrong" with \LaTeX and HTML?
\end{S_exp_war}
\end{document}
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey3bs3jnjjw.fsf@cley.com>
* oleg inconnu wrote:

> \documentclass[letterpaper]{article}
> \bibliographystyle{unsrt}

> \begin{document}
> \begin{S_exp_war}
> Lispers like to flame XML. Now, what is wrong with XML that
> is {\em not} "wrong" with \LaTeX and HTML?
> \end{S_exp_war}
> \end{document}

HTML was never put forward as some kind of Universal Markup Language
for data interchange, neither was TeX/LaTeX; so there's no real
comparison.  TeX has the advantage in its field that for certain kinds
of documents - things containing a lot of maths - it's *very* easy to
type compared to anything else I'm aware of, but the disadvantage that
it's a hideous macro language designed by someone with lots of strange
ideas.

--tim
From: Kaz Kylheku
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <cf333042.0212190935.231ddfb6@posting.google.com>
Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...
> comparison.  TeX has the advantage in its field that for certain kinds
> of documents - things containing a lot of maths - it's *very* easy to
> type compared to anything else I'm aware of, but the disadvantage that
> it's a hideous macro language designed by someone with lots of strange
> ideas.

To be completely fair, it's quite likely that if Knuth were to design
a typesetting language from scratch today, it would be a lot better.
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey31y4dnakn.fsf@cley.com>
* Kaz Kylheku wrote:

> To be completely fair, it's quite likely that if Knuth were to design
> a typesetting language from scratch today, it would be a lot better.

It would be interesting to see a language that was a lot better *for
typing maths* (I agree that TeX, as a language, is really horrible).
I've never seen anything that comes close, except possibly eqn (which
I never used enough to know).  A long time ago I spent some time
typing in mathematical monographs to earn some money, and it got to
the point where I could just look at a (neatly) handwritten set of
displayed equations, and just type it in, and it would almost always
work.  TeX is just incredibly good for that purpose.

--tim
From: Charlton Wilbur
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <87bs3holeq.fsf@mithril.chromatico.net>
>>>>> "TB" == Tim Bradshaw <···@cley.com> writes:

    TB> It would be interesting to see a language that was a lot
    TB> better *for typing maths* (I agree that TeX, as a language, is
    TB> really horrible).  I've never seen anything that comes close,
    TB> except possibly eqn (which I never used enough to know).  A
    TB> long time ago I spent some time typing in mathematical
    TB> monographs to earn some money, and it got to the point where I
    TB> could just look at a (neatly) handwritten set of displayed
    TB> equations, and just type it in, and it would almost always
    TB> work.  TeX is just incredibly good for that purpose.

Also, once you're working at the level of (say) LaTeX, much of the
ugliness of the language is hidden, and what's left is just quirks of
syntax.  LaTeX does 80% of what I want right out of the box, and a few
ventures into deeper layers have brought that number closer to 95%.
Given that Word hovers around 45%, that's a win.  And besides, delving
into LaTeX and TeX innards is a beautiful procrastination exercise
when I should be working on writing something else.

Charlton
From: Kaz Kylheku
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <cf333042.0212191447.54898b3e@posting.google.com>
Charlton Wilbur <·······@mithril.chromatico.net> wrote in message news:<··············@mithril.chromatico.net>...
> Also, once you're working at the level of (say) LaTeX, much of the
> ugliness of the language is hidden, and what's left is just quirks of
> syntax.

That is false; quirks of semantics pervade LaTeX, because of the shaky
foundation on which it is built. Why can't you use certain constructs
in certain contexts? It's just not hygienic.

In a paragraph I can use \verb|...| but in a section heading or
footnote, I think, I must switch to {\tt ...}. Why? Who cares?

At some point you stop caring and just crank out the document.
From: John Williams
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <87vg1pw377.fsf@heisenberg.aston.ac.uk>
I must say that I think we can fairly easily have the beautiful output
from TeX with the notational elegance of using s-exp.

I am starting to use an s-exp notation to prepare technical content to
support lectures that I give. The content is presented online using a
common lisp server and the obvious techniques for such things but can
be just as trivially mapped to LaTeX to produce printed documents.

As an example, the equations (which are certainly one of the real wins
of using LaTeX) can be entered as strings using the LaTeX notation -
this is obviously transparently passed through to produce LaTeX and
printed output. A preprocessing stage creates a gif image for each
equation when the documents are to be used online.

A notation like

((section :title "my section title")
 (p "A paragraph with some inline math: " ((math :latex t) "|E|") 
    " followed by an equation and a figure")

 (equation "\\[E=\\int_{-\\infty}^{\\infty}|g(t)|^2\\,dt\\]")

 ((figure :caption 
  "The relationship between the fundamental frequency of a signal 
   and its bit rate")
  ((graphic :mag 0.7) "fundamental.eps")))

is rendered obviously. 
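A minimal sketch of how such a notation could be mapped to LaTeX in
Common Lisp (the function and its element handling are illustrative
only, not the actual system described above):

```lisp
;; Toy renderer for the s-exp markup above.  A node is either a
;; string or a list whose head is a tag symbol, or a (tag . plist)
;; list carrying attributes, as in the example.
(defun render-latex (node)
  (if (stringp node)
      node
      (destructuring-bind (head &rest body) node
        (let* ((tag (if (consp head) (first head) head))
               (attrs (when (consp head) (rest head)))
               (content (format nil "~{~a~}"
                                (mapcar #'render-latex body))))
          (case tag
            (section (format nil "\\section{~a}~%~a"
                             (getf attrs :title) content))
            (p        (format nil "~a~%~%" content))
            (equation content)   ; LaTeX math passes straight through
            (t        content))))))
```

The same tree could just as well dispatch to an HTML backend, which is
the attraction of keeping the s-exp as the intermediate form.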

Such is the power of s-exp with common lisp processing that it appears
to be pretty easy to add new output and indeed input formats. I
already have students entering their own articles using structured
text which is converted to s-exp for subsequent processing and
display.

I would guess I am not the first to attempt this - it might be
interesting to hear of others' experience in relation to using s-exp
in this way. Is this something we should try and put into a library -
say in CLOCC?

>>>>> "Kaz" == Kaz Kylheku <···@ashi.footprints.net> writes:

    Kaz> Tim Bradshaw <···@cley.com> wrote in message
    Kaz> news:<···············@cley.com>...
    >> comparison.  TeX has the advantage in its field that for
    >> certain kinds of documents - things containing a lot of maths -
    >> it's *very* easy to type compared to anything else I'm aware
    >> of, but the disadvantage that it's a hideous macro language
    >> designed by someone with lots of strange ideas.

    Kaz> To be completely fair, it's quite likely that if Knuth were
    Kaz> to design a typesetting language from scratch today, it would
    Kaz> be a lot better.

-- 
Dr. John A.R. Williams    | http://www.aston.ac.uk/~willijar
Photonics Research Group  | http://www.ee.aston.ac.uk/research/photonics
Electronic Engineering    | http://www.ee.aston.ac.uk/
Aston University          | http://www.aston.ac.uk

	
From: Raymond Toy
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <4nk7i4mxij.fsf@edgedsp4.rtp.ericsson.se>
>>>>> "JW" == John Williams <··············@aston.ac.uk> writes:

    JW> A notation like

    JW> ((section :title "my section title")
    JW>  (p "A paragraph with some inline math: " ((math :latex t) "|E|") 
    JW>     " followed by an equation and a figure")

    JW>  (equation "\\[E=\\int_{-\\infty}^{\\infty}|g(t)|^2\\,dt\\]")

    JW>  ((figure :caption 
    JW>   "The relationship between the fundamental frequency of a signal 
    JW>    and its bit rate")
    JW>   ((graphic :mag 0.7) "fundamental.eps")))

    JW> is rendered obviously. 

As a user I would hate this.  One of the nice things about TeX was
that simple text paragraphs are rendered as paragraphs.  No special
nesting rules, no markers, just the obvious blank line.  No need to
properly enclose/nest sections either---just a simple \section{title}.
Makes it easy to move text around too.

The parts for math, equations, and figures are ok, though.

Ray
From: Adam Warner
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <pan.2002.12.30.12.58.16.528946@consulting.net.nz>
Hi John Williams,

> I would guess I am not the first to attempt this - it might be
> interesting to hear of others experience in relation to using s-exp in
> this way.

Check out what I've been creating over the last few months. I finally got
my new site up three hours ago: https://macrology.co.nz/

I only have one example of the S-expression source available so far at
https://macrology.co.nz/?source (this is the source that generated the
front page and companion PDF).

Regards,
Adam

Note: I have discovered that MSIE 6.0 rendering doesn't mark up verbatim
and quotation correctly. I'm not sure what the cause is yet but they look
fine using Mozilla 1.2.
From: John Williams
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <874r8ts466.fsf@heisenberg.aston.ac.uk>
Adam,

Your approach is very similar to what I am working on. I am using a
slightly different interpretation of the s-exp, with the symbol
representing the element and the attributes together in the first
element of the list. 

To answer those who mentioned that typing in s-exp is more tedious
than entering LaTeX, I agree. Therefore I have a structured-text
element which takes formatted text in a syntax similar to that used by
the structured text parser in Python (the language) and automatically
expands it into markup s-exp. Bulk paragraphs of text are most easily
entered in this way.

Indeed when I take user input (e.g. student written articles) from the
website I take it as structured text, with no html tags allowed. The
structured text syntax is flexible enough for users to refer to
images, crossrefs, etc. in their articles as well as providing basic
formatting and structure, and the syntax is expandable to suit the
particular application.

Therefore, like Adam, I think we can easily use s-exp as an input
language, as an intermediate format for processing and for conversion
to output to whatever other formats we want to use (basically html,
plain text, or LaTeX to produce pdf).


>>>>> "Adam" == Adam Warner <······@consulting.net.nz> writes:

    Adam> Hi John Williams,
    >> I would guess I am not the first to attempt this - it might be
    >> interesting to hear of others experience in relation to using
    >> s-exp in this way.

    Adam> Check out what I've been creating over the last few
    Adam> months. I finally got my new site up three hours ago:
    Adam> https://macrology.co.nz/

    Adam> I only have one example of the S-expression source available
    Adam> so far at https://macrology.co.nz/?source (this is the
    Adam> source that generated the front page and companion PDF).

    Adam> Regards, Adam

    Adam> Note: I have discovered that MSIE 6.0 rendering doesn't
    Adam> markup verbatim and quotation correctly. I'm not sure what
    Adam> the cause is yet but they look fine using Mozilla 1.2.

-- 
John Williams
	
From: Erik Naggum
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <3249243947379622@naggum.no>
* Oleg <············@myrealbox.com>
| \documentclass[letterpaper]{article}
| \bibliographystyle{unsrt}
| 
| \begin{document}
| \begin{S_exp_war}
| Lispers like to flame XML. Now, what is wrong with XML that
| is {\em not} "wrong" with \LaTeX and HTML?
| \end{S_exp_war}
| \end{document}

  If you tried to build a huge information infrastructure on top of
  LaTeX, it would probably be even worse than XML.

  SGML was a good idea as long as it was strictly confined to marking
  up a certain class of documents, whose processing applications were
  clearly restricted in their design.  Making it a meta-language, not
  just an application-specific input language, was not the mistake.
  The mistake that has cost the world billions of dollars was to let
  it escape its confines.  XML is everything that is bad about using
  SGML in the wrong places for the wrong purposes.  It is, in short,
  the worst braindamage that has hit the computing world in a very
  long time.

  I still use SGML to produce documentation.  I dislike HTML and the
  incredible abuse it has seen.  I positively /detest/ XML and the
  disgusting mess it has introduced to the world.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: thelifter
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <b295356a.0212200448.5e455c3e@posting.google.com>
Erik Naggum <····@naggum.no> wrote in message news:<················@naggum.no>...
> 
>   it escape its confines.  XML is everything that is bad about using
>   SGML in the wrong places for the wrong purposes.  It is, in short,
>   the worst braindamage that has hit the computing world in a very
>   long time.

I don't understand your criticism of XML. Basically XML is just
another way of writing S-expr or Trees or whatever you want to call
it. It's more verbose, ok, but I wouldn't consider this to be so
great a disadvantage. I use XML on a daily basis and think it is a
simple and intelligent way to represent data. I would like to hear why
you think it is so bad, can you be more specific please? And how would
you improve on it? For example how would you save a text document from
a wordprocessor with layout information if you didn't use XML (or
something equivalent)?
From: Kaz Kylheku
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <cf333042.0212201140.66c2df58@posting.google.com>
·········@gmx.net (thelifter) wrote in message news:<····························@posting.google.com>...
> Erik Naggum <····@naggum.no> wrote in message news:<················@naggum.no>...
> > 
> >   it escape its confines.  XML is everything that is bad about using
> >   SGML in the wrong places for the wrong purposes.  It is, in short,
> >   the worst braindamage that has hit the computing world in a very
> >   long time.
> 
> I don't understand your criticism of XML. Basically XML is just
> another way of writing S-expr or Trees or whatever you want to call

Lisp expressions are richly typed. How do you write a bitvector in
XML? A complex number? A rational number? How do you distinguish
between a symbol and a string? What about circularity? How do you
write the following in XML?

   #1=(a b #*0111(d e #1#) #c(2 3/2))

In XML, if I have <foo>1234</foo>, what is 1234? Is it a character
string of four digits? Or is it an integer? Where are the semantics?
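For contrast, the standard Common Lisp reader recovers all of that
from the example above directly (a sketch; spaces added around the
bit-vector for readability, and the variable name is arbitrary):

```lisp
;; READ restores the types and the shared structure in one step:
;; a bit-vector, a complex of an integer and a rational, and a
;; cycle introduced by the #1=/#1# labels.
(defparameter *form*
  (read-from-string "#1=(a b #*0111 (d e #1#) #c(2 3/2))"))

(bit-vector-p (third *form*))        ; the #*0111 literal
(complexp (fifth *form*))            ; the #c(2 3/2) literal
(eq (third (fourth *form*)) *form*)  ; #1# closes the cycle

;; Binding *print-circle* to T prints the form back out in the same
;; labelled notation instead of looping forever.
```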

> it. It's more verbose, ok, but I wouldn't consider this to be so a
> great disadvantage.

It is not only more verbose, but substantially harder to parse.
Verifying that XML has correct read syntax requires symbol table
lookup, so that tags can be matched. Verifying that Lisp has correct
read syntax requires only simple lexical pattern matching, combined
with the balancing of parentheses. This means that using XML in small
embedded platforms introduces time and space overheads. It means that
on a fast network where communication becomes CPU-bound, XML can
introduce a visible throughput hit.
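The surface check really is that small; a hedged sketch (string
literals are handled, but not comments or other reader macros):

```lisp
;; One linear pass, no symbol table: parens must balance outside
;; string literals.  The equivalent XML check needs a stack of tag
;; *names* so that each close tag can be matched against its open tag.
(defun balanced-p (text)
  (let ((depth 0) (in-string nil) (escaped nil))
    (loop for ch across text do
      (cond (escaped (setf escaped nil))
            (in-string (case ch
                         (#\\ (setf escaped t))
                         (#\" (setf in-string nil))))
            (t (case ch
                 (#\" (setf in-string t))
                 (#\( (incf depth))
                 (#\) (when (zerop depth)
                        (return-from balanced-p nil))
                      (decf depth))))))
    (and (zerop depth) (not in-string))))
```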

> I use XML on a daily basis and think it is a
> simple and intelligent way to represent data.

Nothing which contains so much redundancy and superfluous crud can
possibly be considered intelligent---unless you believe that anyone
who works with computers and can come up with a working data
representation is intelligent. He or she may be intelligent among the
general population, but not necessarily intelligent within the
relevant peer group.

> I would like to hear why
> you think it is so bad, can you be more specific please? And how would
> you improve on it?

I would toss it in the trash.

> For example how would you save a Text document from
> a wordprocessor with layout informations if you didn't use XML(or
> something equivalent)?

I would use Lisp forms.
From: Tim Haynes
Subject: Re: S-exp vs XML, HTML, LaTeX
Date: 
Message-ID: <86hed8cike.fsf@potato.vegetable.org.uk>
···@ashi.footprints.net (Kaz Kylheku) writes:

[snip]
> In XML, if I have <foo>1234</foo>, what is 1234? Is it a character string
> of four digits? Or is it an integer? Where are the semantics?

While I'm not attempting to make any defence of XML here, there may be
*some* meaning imposed by external context: I tentatively suggest the
viewpoint `if you're dealing with SVG, "these" things are numeric', fsvo
"these".
It's still not an integral feature of the document at-hand, of course.

>> For example how would you save a Text document from a wordprocessor with
>> layout informations if you didn't use XML(or something equivalent)?
>
> I would use Lisp forms.

After the last couple of days' experiments, I'm tempted to agree.

Background: writing a weblog-style system-news announcement system.
Data entered from an html textarea, stored in $RDBMS as-is.
Have a CGI (written in ruby, just 'cos), designed to spew output in
?format=(xml|html|text|sexp) form.
Can take SQL query result-set and generate XML just fine. Am attempting to
push this through sqlxml2{html,text}.xsl in order to generate the html and
plaintext forms.
For HTML, I want embedded ^M characters in the "body" section of each
article to appear as <br> tags. For plaintext, I want the "body" section
refilled at <80 characters.

Some investigation shows that "the" way to replace ^M by <br> tags in XSL
is about 10 lines of templates to match before-part and after-part of the
field. Right. Let there be s///, let there be #gsub(), let there be
*anything* other than more spikey <> stuck in my face.

So far any properly programmed way would be better.

~Tim, a sort-of newbie to lispy things.
-- 
Move a mountain / Fill the ground           ·······@stirfried.vegetable.org.uk
Take death on wheels / Re-create the land   |http://spodzone.org.uk/
From: thelifter
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <b295356a.0212221337.70f9664b@posting.google.com>
···@ashi.footprints.net (Kaz Kylheku) wrote in message news:<····························@posting.google.com>...
> ·········@gmx.net (thelifter) wrote in message news:<····························@posting.google.com>...
> > I don't understand your criticism of XML. Basically XML is just
> > another way of writing S-expr or Trees or whatever you want to call
> 
> Lisp expressions are richly typed. How do you write a bitvector in
> XML? A complex number? A rational number? How do you distinguish
> between a symbol and a string? What about circularity? How do you
> write the following in XML?
> 
>    #1=(a b #*0111(d e #1#) #c(2 3/2))
> 
> In XML, if I have <foo>1234</foo>, what is 1234? Is it a character
> string of four digits? Or is it an integer? Where are the semantics?

Often it is clear from the context what type it is. Often it doesn't
matter; you can just treat everything as a string. And if you want to
introduce type, nothing stops you. Example:

<foo type="integer">1234</foo>

> 
> > it. It's more verbose, ok, but I wouldn't consider this to be so a
> > great disadvantage.
> 
> It is not only more verbose, but substantially harder to parse.

Ok, it's harder to parse. But this doesn't mean that it is a pile of
crap.

 
> > For example how would you save a Text document from
> > a wordprocessor with layout informations if you didn't use XML(or
> > something equivalent)?
> 
> I would use Lisp forms.

Yes, but as I already said, you can convert any Lisp form to XML and
vice-versa.

Basically all you said is that XML is harder to parse. Ok, but it
isn't a pile of crap. I think it would be more fair to say:

XML is a harder to parse S-exp, so it generates more overhead. That's
the disadvantage of using it. Otherwise it is ok.
From: Oleg
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <au5eja$et1$1@newsmaster.cc.columbia.edu>
thelifter wrote:

[...]

> XML is a harder to parse S-exp, so it generates more overhead. That's
> the disadvantage of using it. Otherwise it is ok.

What are your thoughts, if any, on the circularity issue?

Cheers
Oleg
From: thelifter
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <b295356a.0212221939.72f1df9a@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...
> thelifter wrote:
> 
> [...]
> 
> > XML is a harder to parse S-exp, so it generates more overhead. That's
> > the disadvantage of using it. Otherwise it is ok.
> 
> What are your thoughts, if any, on the circularity issue?

how about:

#1 = <foo>
       <bar>
          123
       </bar>
       <pointer>
          #1
       </pointer>
     </foo>
From: Erik Naggum
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <3249606458575514@naggum.no>
* ·········@gmx.net (thelifter)
| how about:

  ... grasping the purpose of ID and IDREF attributes?

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Henrik Motakef
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <87smwp4m4d.fsf@pokey.henrik-motakef.de>
Oleg <············@myrealbox.com> writes:

> > XML is a harder to parse S-exp, so it generates more overhead. That's
> > the disadvantage of using it. Otherwise it is ok.
> 
> What are your thoughts, if any, on the circularity issue?


<!DOCTYPE circularity-demo [
  <!ELEMENT circularity-demo (cons+)>
  <!ELEMENT cons (car, cdr)>
  <!ATTLIST cons id ID #REQUIRED>
  <!ELEMENT car (#PCDATA)>
  <!ELEMENT cdr EMPTY>
  <!ATTLIST cdr ref IDREF #REQUIRED>
]>
<circularity-demo>
  <cons id="cons1">
    <car>foo</car>
    <cdr ref="cons2"/>
  </cons>
  
  <cons id="cons2">
    <car>bar</car>
    <cdr ref="cons3"/>
  </cons>

  <cons id="cons3">
    <car>baz</car>
    <cdr ref="cons1"/>
  </cons>
</circularity-demo>
From: Erik Naggum
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <3249599069437733@naggum.no>
* ·········@gmx.net (thelifter)
| Yes, but as I already said, you can convert any Lisp form to XML
| and vice-versa.

  I thought we had utterly destroyed this stupid argument when it is
  phrased in terms of Turing Equivalence.

| Basically all you said is that XML is harder to parse.  Ok, but it
| isn't a pile of crap. I think it would be more fair to say:

  Why are you insisting so much on having other people approve of your
  personal opinion?

| XML is a harder to parse S-exp, so it generates more overhead.
| That's the disadvantage of using it.  Otherwise it is ok.

  I think you need to realize that just because you have a personal
  need to feel that you do not engage in something idiotic and evil
  does not mean that it is any less idiotic and evil.

  I think I may have to answer your previous question thoroughly.  If
  you are not willing to accept that some people think that XML is
  the worst piece of shit to hit the computer fan in /ages/, that is
  OK, but that does not make those people go away, nor their arguments
  that uninventing XML would be about as beneficial to the world as
  unelecting George W. Bush.  XML is in my view the result of idiots
  who take something good and destroy it by taking it too far.  But
  more on that in another article.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Espen Vestre
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <kwu1h5uo12.fsf@merced.netfonds.no>
Erik Naggum <····@naggum.no> writes:

>   I thought we had utterly destroyed this stupid argument when it is
>   phrased in terms of Turing Equivalence.

Agreed. I think we need a corollary of Godwin's Law here.
-- 
  (espen)
From: Thomas F. Burdick
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <xcvisxkgz78.fsf@famine.OCF.Berkeley.EDU>
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:

> Erik Naggum <····@naggum.no> writes:
> 
> >   I thought we had utterly destroyed this stupid argument when it is
> >   phrased in terms of Turing Equivalence.
> 
> Agreed. I think we need a collary of Godwin's Law here.

Why bother?  It's well known that any usenet law can be simulated by
any other, with only a polynomial increase in time.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Kaz Kylheku
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <cf333042.0212230758.50b86c0c@posting.google.com>
·········@gmx.net (thelifter) wrote in message news:<····························@posting.google.com>...
> ···@ashi.footprints.net (Kaz Kylheku) wrote in message news:<····························@posting.google.com>...
> > ·········@gmx.net (thelifter) wrote in message news:<····························@posting.google.com>...
> > > I don't understand your criticism of XML. Basically XML is just
> > > another way of writing S-expr or Trees or whatever you want to call
> > 
> > Lisp expressions are richly typed. How do you write a bitvector in
> > XML? A complex number? A rational number? How do you distinguish
> > between a symbol and a string? What about circularity? How do you
> > write the following in XML?
> > 
> >    #1=(a b #*0111(d e #1#) #c(2 3/2))
> > 
> > In XML, if I have <foo>1234</foo>, what is 1234? Is it a character
> > string of four digits? Or is it an integer? Where are the semantics?
> 
> Often it is clear from the context what type it is.

Note that ``often it is'' means ``sometimes it is not''.

Context is that information which is not present in the written
representation; it's an implicit, shared understanding between the two
communicating parties. As such, it's a catch-all bag; given enough
context, you can derive arbitrary amounts of information from a single
binary digit.

One problem is that context requires extra processing. If the notation
hands me a character string, and the context tells me that it's an
integer, I have to parse that character string to turn it into an
integer, which means verifying that it has the right lexical form.

Another problem is that there is no universal convention for
representing an integer, beyond a simple string of decimal digits. Do
I handle a unary minus sign? In some conventions, the sign is part of
the integer token; in others, it's an extra symbol that behaves like a
unary operator. Can there be whitespace between the leading plus or
minus and the digits? What about using bases other than ten? Do we use
a prefix? Which one? 0x or #x for hex? Do we pardon and ignore
trailing non-digit garbage?
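For a concrete taste of how arbitrary these conventions are, here is one host language's built-in convention, Python's int(), exercised on exactly those questions:

```python
# One convention among many; other languages answer each question differently.
assert int("-42") == -42         # here, the sign is part of the token
assert int("  42  ") == 42       # surrounding whitespace is pardoned
assert int("0x2a", 16) == 42     # the 0x prefix needs an explicit base

try:
    int("- 42")                  # space between sign and digits: rejected
    accepted = True
except ValueError:
    accepted = False
assert not accepted

try:
    int("42crud")                # trailing garbage is not ignored either
    accepted = True
except ValueError:
    accepted = False
assert not accepted
```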

Many programmers will unfortunately handle this case by delegating to
the string-integer conversion that is built into their programming
language. And so now you have dragged the semantics of a programming
language into your XML representation.

> Often it doesn't matter, you can just treat everything as a string.

Often, that creates duplication of processing, overhead, and hard-to-find
defects. If data is not uniformly and correctly parsed into a properly
typed data structure immediately on entry into the running software,
there is no telling what will happen to it.

> And if you want to introduce type, nothing stops you. Example:

I don't *want* to introduce type. Type was introduced in the 1950's by
my computing predecessors. In this new millennium, I just want to
*use* type.

> <foo type="integer">1234</foo>

I'd rather just write 1234, and be universally understood.

> > > it. It's more verbose, ok, but I wouldn't consider this to be so a
> > > great disadvantage.
> > 
> > It is not only more verbose, but substantially harder to parse.
> 
> Ok, it's harder to parse. But this doesn't mean that it is a pile of
> crap.

Sure it is. Something that lacks important virtues that are needed for
excellence in its category is a pile of crap.

Anyone designing a printed data representation should have easy and
efficient scanability as a major goal. Another goal should be a
standard representation of types. Users of the notation should not
have to agree on additional conventions, and use a whole lot of extra
syntax, just to distinguish strings, symbols and integers.

> > > For example how would you save a Text document from
> > > a wordprocessor with layout informations if you didn't use XML(or
> > > something equivalent)?
> > 
> > I would use Lisp forms.
> 
> Yes, but as I already said, you can convert any Lisp form to XML and
> vice-versa.

You failed to show how my example #1=(a b #*0111(d e #1#) #c(2 3/2))
can be converted to XML and then back without loss of information.

That example has one virtue, by the way: it is universally understood
to encode a certain abstract data structure. Even if you encode it
into XML, with type information and all, an additional document will
be required which will describe the semantics of all the extra tags
and attributes you have to invent. That extra context will be needed
to convert it to Lisp. The need for that context means that it's not
really XML that did the representing.
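The circularity part of that example is easy to reproduce in any language whose printer has no reader labels like #1=; a Python sketch:

```python
# Without #1=-style labels, the printed form cannot carry a cycle.
a = [1, 2]
a.append(a)                          # a now contains itself
printed = repr(a)
assert printed == "[1, 2, [...]]"    # the cycle is elided in the printed form

# Reading the printed form back does not restore the cycle; the "..."
# comes back as Python's Ellipsis object, and the information is lost.
assert eval(printed) == [1, 2, [Ellipsis]]
```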

> Basically all you said is that XML is harder to parse.

No, that is merely all you heard.
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey3u1gyxntm.fsf@cley.com>
* thelifter  wrote:
> ···@ashi.footprints.net (Kaz Kylheku) wrote in message news:<····························@posting.google.com>...
 
>> In XML, if I have <foo>1234</foo>, what is 1234? Is it a character
>> string of four digits? Or is it an integer? Where are the semantics?

> Often it is clear from the context what type it is. Often it doesn't
> matter, you can just treat everything as a string. And if you want to
> introduce type, nothing stops you. Example:

> <foo type="integer">1234</foo>

So now I have to write my own little string->integer routine, and my
own little string->float routine, and my own little string->x routine
for every type x that I want to support that isn't a string.  You have
no idea the joy I feel at having to write all these little parsers,
especially the float one, which I have about 1% chance of getting
right.
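The flavor of the problem, sketched in Python with a deliberately naive parser (purely illustrative; no real code should do this):

```python
def naive_float(s):
    """A hand-rolled string->float of the kind lamented above."""
    whole, _, frac = s.partition(".")
    sign = -1.0 if whole.startswith("-") else 1.0
    digits = whole.lstrip("+-")
    value = float(int(digits or "0"))
    for i, d in enumerate(frac, start=1):
        value += int(d) / 10 ** i
    return sign * value

assert naive_float("3.5") == 3.5     # the easy cases work...
assert naive_float("-2.5") == -2.5

try:                                  # ...but scientific notation was never
    naive_float("1e3")               # considered, so the routine just crashes
    crashed = False
except ValueError:
    crashed = True
assert crashed
```

(And this one doesn't even attempt correct rounding, infinities, or NaN, which is where the remaining 99% of the difficulty lives.)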

What was the problem XML was meant to solve, again?  Something to do
with not having every application have its own little parser for data
interchange, wasn't it?  But they didn't bother to provide syntax for
numbers, a type of data that programs do quite often need to
interchange, oh no, because that would have actually involved solving
the problem rather than providing employment for a few thousand CS
graduates writing yet another buggy, incomplete float parser.  Well, I
hate to tell you, but the high-tech bubble has pretty much burst and
it might be about time that people designing data interchange formats
actually designed them, rather than solving some trivial subset
problem, badly: paying for non-working systems is no longer
fashionable.

--tim
From: Henrik Motakef
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <87n0mq5db2.fsf@pokey.henrik-motakef.de>
Tim Bradshaw <···@cley.com> writes:

> > <foo type="integer">1234</foo>
> 
> So now I have to write my own little string->integer routine, and my
> own little string->float routine, and my own little string->x routine
> for every type x that I want to support that isn't a string.  You have
> no idea the joy I feel at having to write all these little parsers,
> especially the float one, which I have about 1% chance of getting
> right.

Then use <foo xsi:type="xsi:integer"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">1234</foo> or
declare the content of foo to be of type xsi:integer in your schema,
and use a library that gives you access to the PSVI (AFAIK Apache's
Xerces should do).

Oh, you were looking for something that doesn't introduce more
problems than it solves? Never mind. But it rocks if your documents
contain lots of strangely formatted dates, times and durations ;-)

Regards
Henrik
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey3fzsh6hlp.fsf@cley.com>
* Henrik Motakef wrote:

> Oh, you were looking for something that doesn't introduce more
> problems than it solves? Never mind. But it rocks if your documents
> contain lots of strangely formatted dates, times and durations ;-)

Yes.  I was looking for something that gives me as good or
(preferably) better tradeoffs of utility against complexity than READ
does.  Better than READ would be easy - have some (not too complex)
way of stopping automatic symbol interning for instance - but what XML
gives me is so much worse you need a logarithmic scale to even *see*
it.

--tim
From: Ray Blaak
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ubs331ava.fsf@telus.net>
Henrik Motakef <··············@web.de> writes:
> Tim Bradshaw <···@cley.com> writes:
> > > <foo type="integer">1234</foo>
> 
> Then use <foo xsi:type="xsi:integer"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">1234</foo> or
> declare the content of foo to be of type xsi:integer in your schema,
> and use a library that gives you access to the PSVI (AFAIK Apache's
> Xerces should do).

This works, but it sucks, and it sucks because it is a duplication of work and
it is tedious to use.

The application that ultimately reads this data necessarily must validate and
parse it on its own, simply in the name of defensive programming.

With the Schema/DTD thing, one describes the data content twice: once in the
application's implementation, and once in the schema.

The only time I see a use for Schemas/DTDs is when one has a standard that
is implemented by multiple people/vendors (e.g. an EJB deployment descriptor).
In that case, though, the grammars are really serving as a formal description
of the file content.

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Ray Blaak
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <uznr0sko5.fsf@telus.net>
·········@gmx.net (thelifter) writes:
> Erik Naggum <····@naggum.no> wrote in message news:<················@naggum.no>...
> >   XML is everything that is bad about using
> >   SGML in the wrong places for the wrong purposes.  It is, in short,
> >   the worst braindamage that has hit the computing world in a very
> >   long time.
> 
> I don't understand your criticism of XML.

Google previous Erik/XML threads in this group. He lays it out pretty clearly.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Erik Naggum
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <3250033069468718@naggum.no>
* ·········@gmx.net (thelifter)
| I don't understand your criticism of XML.

  I sometimes regret that human memory is such a great tool for one's
  personal life that coming to rely on the wider context it provides
  in one's communication with others is so fragile.  I have explained
  this dozens of times, but I guess each repetition adds something.

| Basically XML is just another way of writing S-expr or Trees or
| whatever you want to call it.

  They are not identical.  The aspects you are willing to ignore are
  more important than the aspects you are willing to accept.  Robbery
  is not just another way of making a living, rape is not just another
  way of satisfying basic human needs, torture is not just another way
  of interrogation.  And XML is not just another way of writing S-exps.
  There are some things in life that you do not do if you want to be a
  moral being and feel proud of what you have accomplished.

  SGML was a major improvement on the markup languages that preceded
  it (including GML), which helped create better publishing systems
  and helped people think about information in much improved ways, but
  when the zealots forgot the publishing heritage and took the notion
  that information can be separated from presentation out of the world
  of publishing into general data representation because SGML had had
  some success in "database publishing", something went awry, not only
  superficially, but fundamentally.  It is not unlike when a baby,
  whose mother satisfies its every need before it is even aware that
  it has been expressed, grows up to believe that the world in general
  is both influenced by and obliged to satisfy its whims.  Even though
  nobody in their right mind would argue that babies should fend for
  themselves and earn their own living, at some point in the child's
  life, it must begin a progression towards independence, which is not
  merely a quantitative difference from having every need satisfied by
  crying, but a qualitative difference of enormous consequence.  Many
  an idea or concept not only looks, but /is/ good in its infancy, yet
  turns destructive later in life.  Scaling and maturation are not the
  obvious processes they appear to be because they take so much time
  that the accumulated effort is easy to overlook.  To be successful,
  they must also be very carefully guided by people who can envision
  the end result, but that makes it appear to many as if it merely
  "happens".  Take a good idea out of its infancy, let it age without
  guidance so it does not mature, and it generally goes bad.  If GML
was an infant, SGML is the bright youngster who far exceeded expectations
  and made its parents too proud, but XML is the drug-addicted gang
  member who had committed his first murder before he had sex, which
  was rape.

  SGML is a good idea when the markup overhead is less than 2%.  Even
  attributes is a good idea when the textual element contents is the
  "real meat" of the document and attributes only aid processing, so
  that the printed version of a fully marked-up document has the same
  characters as the document sans tags.  Explicit end-tags is a good
  idea when the distance between start- and end-tag is more than the
  20-line terminal the document is typed on.  Minimization is a good
  idea in an already sparsely tagged document, both because tags are
  hard to keep track of and because clusters of tags are so intrusive.
  Character entities is a good idea when your entire character set is
  EBCDIC or ASCII.  Validating the input prior to processing is a good
  idea when processing would take minutes, if not hours, and consume
  costly resources, only to abend.  SGML had an important potential in
  its ability to let the information survive changes in processing
  equipment or software where its predecessors clearly failed.  But,
  to continue the baby metaphor, you have to go into fetishism to keep
  using diapers as you age but fail to mature. (I note in passing that
  the stereotypical American male longs for much larger than natural
  female breasts, presumably to maintain the proportion to his own
  size from his infancy, which has caused the stereotypical American
  female to feel a need for breasts that will give the next generation
  a demand for even more disproportionally large breasts.)  When the
markup overhead exceeds 200%, when attribute values and element
  contents compete for the information, when the distance between 99%
  of the "tags" is /zero/, when the character set is Unicode, and when
  validation takes more time than processing, not to mention the sorry
  fact that information longevity is more /threatened/ by XML than by
  any other data representation in the history of computing, then SGML
  has gone from good kid, via bad teenager, to malfunctioning, evil
  adult as XML.  SGML was in many ways smarter than necessary at the
  time it was a bright idea, it was evidence of too much intelligence
  applied to the problems it solved.  A problem mankind has not often
  had to deal with is that of excessive intelligence; more often than
  not, technological solutions are barely intelligent enough to solve
  the problem at hand.  If a solution is much smarter than the problem
  and really stupid people notice it, they believe they have got their
  hands on something /great/, and so they destroy it, not unlike how
  giving stupid people too much power can threaten world peace and
  unravel legal concepts like due process and presumption of innocence.

  I once believed that it would be very beneficial for our long-term
  information needs to adorn the text with as much meta-information as
  possible.  I still believe that the world would be far better off if
  it had evolved standardized syntactic notations for time, location,
  proper names, language, etc, and that even prose text would be
  written in such a way that precision in these matters would not be
  sacrificed, but most people are so obsessively concerned with their
  immediate personal needs that anything that could be beneficial on a
much larger scale has no chance of surviving.  Look at the United
  States of America, with its depressingly moronic units instead of
  going metric, with its inability to write dates in either ascending
  or descending order of unit size, and with its insistence upon the
  12-hour clock, clearly evidencing the importance of the short-term
  pain threshold and resistance to doing anyone else's bidding.  And
  now the one-time freest nation of the world has turned dictatorship
  with a dangerous moron in charge, set to attack Iraq to revenge his
  father's loss.  Those who laughed when I said that stupidity is the
  worst threat to mankind laugh no more; they wait with bated breath
  to see if the world's most powerful incoherent moron will launch the
  world into a world war simply because he is too fucking stupid.  But
what really pisses me off is the spineless American people who fail
  to stop this madness.  Presidents have been shot and killed before.
  I seem to be digressing -- the focal point is that the masses, those
  who exert no effort to better themselves, cannot be expected to help
  solve any problems larger than their own, and so they must be forced
  by various means, such as compulsory education, spelling checkers,
  newspaper editors who do /not/ publish their letters to the editor,
  and not least by the courts that restrain the will to revenge, in
  order to keep a modicum of sanity in the frail structure that is
  human society.  We are clearly not at the stage of human development
  where writers are willing to accept the burden of communicating to
  the machine what they are thinking.  One has to marvel at the wide
  acceptance of our existing punctuation marks and the sociology of
  their acceptance.  "Tagging" text for semantic constructs that the
  human mind is able to discern from context must be millennia off.

  In many ways, the current American presidency and XML have much in
  common.  Both have clear lineages back to very intelligent people.
  Both demonstrate what happens when you give retards the tools of the
  intelligent.  Some Americans obsess over gun control, to limit the
  number of handguns in the hands of their civilians, but support the
  most out-of-control nutcase in the young history of the nation and
  rally behind his world-threatening abuse of guns.  The once noble
  concern over validation to curb excessive costs of too powerful a
  tool for the people who used it, has turned into an equally insane
  abuse of power in the XML world.  How could such staggering idiots
  as have become "leaders" of the XML world and the free world come to
  their power?  Clearly, they gain support from the masses who have no
  concerns but their immediate needs, no ability to look for long-term
  solutions and stability, no desire to think further ahead than that
  each individual decision they make be the best for them.  Lethargy
  and pessimism, lack of long-term goals, apathy towards consequences,
  they are all symptoms of depressed people, and it is perhaps no
  coincidence that the world economy is now in a depression.  My take
  on it is that it is because too much growth also rewarded people of
  such miniscule intellectual prowess that they turned to fraud rather
  than tackle the coming negative trends intelligently.  Whether Enron
  or W3C or the GOP, everyone knows that fraud does pay in the short
  term and that bad money drives out good.  When even the staggering
  morons are rewarded, the honest and intelligent must lose, and even
  the best character will have a problem when being honest means that
he forfeits a chance to receive a hundred million dollars.  In both
  the Bush administration and the W3C standards administration, we see
  evidence that large groups of people did not believe that it would
  matter who assumed power.  I am quite certain that just as Bush is
  supposed to be a thoroughly /likable/ person, the people who work up
  the most demented "standards" in the W3C lack that personality trait
that is both abrasive and exhibits leadership potential.  When the
  overall growth of something is so rapid that an idiotic decision no
  longer causes any immediate losses, the number of such decisions
  will grow without bounds until the losses materialize, such as in an
  economic depression.  When the losses are so diffused as to not even
  affect the idiots behind the decisions, they can stay in power for a
  very long time until they are blamed for a large number of ills they
  had no power to predict, but that is precisely what caused them.

| I use XML on a daily basis and think it is a simple and intelligent
| way to represent data.

  A comment on this statement is by now entirely superfluous.

| I would like to hear why you think it is so bad, can you be more
| specific please?

  If you really need more information, search the Net, please.

| And how would you improve on it?

  A brief summary, then: Remove the syntactic mess that is attributes.
  (You will then find that you do not need them at all.)  Enclose the
  /element/ in matching delimiters, not the tag.  These simple things
make people think differently about how they use the language.
  Contrary to the foolish notion that syntax is immaterial, people
  optimize the way they express themselves, and so express themselves
  differently with different syntaxes.  Next, introduce macros that
  look exactly like elements, but that are expanded in place between
  the reader and the "object model".  Then, remove the obnoxious
  character entities and escape special characters with a single
  character, like \, and name other entities with letters following
  the same character.  If you need a rich set of publishing symbols,
  discover Unicode.  Finally, introduce a language for micro-parsers
that can take more convenient syntaxes for commonly used elements
  with complex structure and make them /return/ element structures
  more suitable for processing on the receiving end, and which would
  also make validation something useful.  The overly simple regular
  expression look-alike was a good idea when processing was expensive
  and made all decisions at the start-tag, but with a DOM and less
  stream-like processing, a much better language should be specified
  that could also do serious computation before validating a document
  -- so that once again processing could become cheaper because of the
  "markup", not more expensive because of it.
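As a toy illustration of what a typed reader buys, here is a minimal s-expression reader sketched in Python (invented names, standing in for Lisp's READ; not the proposed language): tokens carry their types, so the reader returns integers, rationals, strings and symbols directly, with no out-of-band context needed.

```python
from fractions import Fraction

def read_token(tok):
    """Map a printed token to a typed value; symbols are tagged tuples here."""
    try:
        return int(tok)                  # 1234 is an integer, universally
    except ValueError:
        pass
    if "/" in tok:
        try:
            return Fraction(tok)         # 3/2 is a rational
        except ValueError:
            pass
    if tok.startswith('"') and tok.endswith('"') and len(tok) >= 2:
        return tok[1:-1]                 # "..." is a string (no spaces in this toy)
    return ("symbol", tok)               # everything else is a symbol

def read_sexp(text):
    """Parse a (whitespace-separated, toy) s-expression into typed data."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(i):
        if tokens[i] == "(":
            out, i = [], i + 1
            while tokens[i] != ")":
                node, i = parse(i)
                out.append(node)
            return out, i + 1
        return read_token(tokens[i]), i + 1
    node, _ = parse(0)
    return node

assert read_sexp("(a 2 3/2)") == [("symbol", "a"), 2, Fraction(3, 2)]
assert read_sexp("42") == 42
```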

  But the one thing I would change the most from a markup language
  suitable for marking up the incidental instruction to a type-setter
  to the data representation language suitable for the "market" that
  XML wants, is to go for a binary representation.  The reasons for
  /not/ going binary when SGML competed with ODA have been reversed:
  When information should survive changes in the software, it was an
  important decision to make the data format verbose enough that it
  was easy to implement a processor for it and that processors could
  liberally accept what other processors conservatively produced, but
  now that the data formats that employ XML are so easily changed
  that the software can no longer keep up with it, we need to slam on
the brakes and tell the redefiners to curb their enthusiasm, get it
  right before they share their experiments with the world, and show
  some respect for their users.  One way to do that is to increase the
  cost of changes to implementations without sacrificing readability
  and without making the data format more "brittle", by going binary.
  Our information infrastructure has become so much better that the
  nature of optimization for survivability has changed qualitatively.
  The question of what we humans need to read and write no longer has
  any bearing on what the computers need to work with.  One of the
  most heinous crimes against computing machinery is therefore to
  force them to parse XML when all they want is the binary data.  As
  an example, think of the Internet Protocol and Transmission Control
  Protocol in XML terms.  Implementors of SNMP regularly complained
  that parsing the ASN.1 encodings took a disproportionate amount of
  processing time, but they also acknowledged that properly done, it
  mapped directly to the values they needed to exchange.  Now, think
  of what would have happened had it not been a Simple, but instead
  some moronic excuse for an eXtensible Network Management Protocol.

  Another thing is that we have long had amazingly rich standards for
  such "display attributes" as many now use HTML and the like.  The
  choice to use SGML for web publication was not entirely braindead,
  but it should have been obvious from the outset that page display
  would become important, if not immediately, then after watching what
  people were trying to do with HTML.  The Web provided me with a much
  needed realization that information cannot be /fully/ separated from
  its presentation, and showed me something I knew without verbalizing
  explicitly, that the presentation form we choose communicates real
  information.  Encoding all of it via markup would require a very
  fine level of detail, not to mention /awareness/ of issues so widely
  dispersed in the population that only a handful of people per
  million grasp them.  Therefore, to be successful, there must be an
  upper limit to the complexity of the language defined with SGML, and
  one must go on to solve the next problem, not sit idle with a set of
  great tools and think "I ought to use these tools for something".
  Stultifying as the language of content models may be, it amazes me
  that people do not grasp that they need to use something else when
  it becomes too painful to express with SGML, but I am in the highly
  privileged position of knowing a lot more than SGML when I pronounce
  my judgment on XML.  For one thing, I knew Lisp before I saw SGML,
  so I know what brilliant minds can do under optimal conditions and
  when they ensure that the problem is still bigger than the solution.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Joe Marshall
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <4r8vmmh8.fsf@ccs.neu.edu>
Erik Naggum <····@naggum.no> writes:

>   I am quite certain that just as Bush is supposed to be a
>   thoroughly /likable/ person, the people who work up the most
>   demented "standards" in the W3C lack that personality trait that
>   is both abrasive and exhibit leadership potential.

We select our leaders over here by `popular' vote (more or less, there
is that electoral college, the supreme court, etc., etc. but you can
hardly say that Bush was *significantly* (order of magnitude) less
popular than Gore).

The US election system is optimized to favor good-looking, personable
people with decent on-camera rapport.  It works quite well in that
regard. 
From: Erik Naggum
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <3250033735497397@naggum.no>
* ·········@gmx.net (thelifter)
| I don't understand your criticism of XML.

  I sometimes regret that human memory is such a great tool for one's
  personal life that coming to rely on the wider context it provides
  in one's communication with others is so fragile.  I have explained
  this dozens of times, but I guess each repetition adds something.

| Basically XML is just another way of writing S-expr or Trees or
| whatever you want to call it.

  They are not identical.  The aspects you are willing to ignore are
  more important than the aspects you are willing to accept.  Robbery
  is not just another way of making a living, rape is not just another
  way of satisfying basic human needs, torture is not just another way
  of interrogation.  And XML is not just another way of writing S-exps.
  There are some things in life that you do not do if you want to be a
  moral being and feel proud of what you have accomplished.

  SGML was a major improvement on the markup languages that preceded
  it (including GML), which helped create better publishing systems
  and helped people think about information in much improved ways, but
  when the zealots forgot the publishing heritage and took the notion
  that information can be separated from presentation out of the world
  of publishing into general data representation because SGML had had
  some success in "database publishing", something went awry, not only
  superficially, but fundamentally.  It is not unlike when a baby,
  whose mother satisfies its every need before it is even aware that
  it has been expressed, grows up to believe that the world in general
  is both influenced by and obliged to satisfy its whims.  Even though
  nobody in their right mind would argue that babies should fend for
  themselves and earn their own living, at some point in the child's
  life, it must begin a progression towards independence, which is not
  merely a quantitative difference from having every need satisfied by
  crying, but a qualitative difference of enormous consequence.  Many
  an idea or concept not only looks, but /is/ good in its infancy, yet
  turns destructive later in life.  Scaling and maturation are not the
  obvious processes they appear to be because they take so much time
  that the accumulated effort is easy to overlook.  To be successful,
  they must also be very carefully guided by people who can envision
  the end result, but that makes it appear to many as if it merely
  "happens".  Take a good idea out of its infancy, let it age without
  guidance so it does not mature, and it generally goes bad.  If GML
  was an infant, SGML is the bright youngster far exceeds expectations
  and made its parents too proud, but XML is the drug-addicted gang
  member who had committed his first murder before he had sex, which
  was rape.

  SGML is a good idea when the markup overhead is less than 2%.  Even
  attributes is a good idea when the textual element contents is the
  "real meat" of the document and attributes only aid processing, so
  that the printed version of a fully marked-up document has the same
  characters as the document sans tags.  Explicit end-tags is a good
  idea when the distance between start- and end-tag is more than the
  20-line terminal the document is typed on.  Minimization is a good
  idea in an already sparsely tagged document, both because tags are
  hard to keep track of and because clusters of tags are so intrusive.
  Character entities is a good idea when your entire character set is
  EBCDIC or ASCII.  Validating the input prior to processing is a good
  idea when processing would take minutes, if not hours, and consume
  costly resources, only to abend.  SGML had an important potential in
  its ability to let the information survive changes in processing
  equipment or software where its predecessors clearly failed.  But,
  to continue the baby metaphor, you have to go into fetishism to keep
  using diapers as you age but fail to mature. (I note in passing that
  the stereotypical American male longs for much larger than natural
  female breasts, presumably to maintain the proportion to his own
  size from his infancy, which has caused the stereotypical American
  female to feel a need for breasts that will give the next generation
  a demand for even more disproportionally large breasts.)  When the
  markup overhead exceeds 200%, when attributes values and element
  contents compete for the information, when the distance between 99%
  of the "tags" is /zero/, when the character set is Unicode, and when
  validation takes more time than processing, not to mention the sorry
  fact that information longevity is more /threatened/ by XML than by
  any other data representation in the history of computing, then SGML
  has gone from good kid, via bad teenager, to malfunctioning, evil
  adult as XML.  SGML was in many ways smarter than necessary at the
  time it was a bright idea, it was evidence of too much intelligence
  applied to the problems it solved.  A problem mankind has not often
  had to deal with is that of excessive intelligence; more often than
  not, technological solutions are barely intelligent enough to solve
  the problem at hand.  If a solution is much smarter than the problem
  and really stupid people notice it, they believe they have got their
  hands on something /great/, and so they destroy it, not unlike how
  giving stupid people too much power can threaten world peace and
  unravel legal concepts like due process and presumption of innocence.

  I once believed that it would be very beneficial for our long-term
  information needs to adorn the text with as much meta-information as
  possible.  I still believe that the world would be far better off if
  it had evolved standardized syntactic notations for time, location,
  proper names, language, etc, and that even prose text would be
  written in such a way that precision in these matters would not be
  sacrificed, but most people are so obsessively concerned with their
  immediate personal needs that anything that could be beneficial on a
  much larger scale has no chance of surviving.  Look at the United
  States of America, with its depressingly moronic units instead of
  going metric, with its inability to write dates in either ascending
  or descending order of unit size, and with its insistence upon the
  12-hour clock, clearly evidencing the importance of the short-term
  pain threshold and resistance to doing anyone else's bidding.  And
  now the one-time freest nation of the world has turned into a
  dictatorship with a dangerous moron in charge, set to attack Iraq to avenge his
  father's loss.  Those who laughed when I said that stupidity is the
  worst threat to mankind laugh no more; they wait with bated breath
  to see if the world's most powerful incoherent moron will launch the
  world into a world war simply because he is too fucking stupid.  But
  what really pisses me off is the spineless American people who fail
  to stop this madness.  Presidents have been shot and killed before.
  I seem to be digressing -- the focal point is that the masses, those
  who exert no effort to better themselves, cannot be expected to help
  solve any problems larger than their own, and so they must be forced
  by various means, such as compulsory education, spelling checkers,
  newspaper editors who do /not/ publish their letters to the editor,
  and not least by the courts that restrain the will to revenge, in
  order to keep a modicum of sanity in the frail structure that is
  human society.  We are clearly not at the stage of human development
  where writers are willing to accept the burden of communicating to
  the machine what they are thinking.  One has to marvel at the wide
  acceptance of our existing punctuation marks and the sociology of
  their acceptance.  "Tagging" text for semantic constructs that the
  human mind is able to discern from context must be millennia off.

  In many ways, the current American presidency and XML have much in
  common.  Both have clear lineages back to very intelligent people.
  Both demonstrate what happens when you give retards the tools of the
  intelligent.  Some Americans obsess over gun control, to limit the
  number of handguns in the hands of their civilians, but support the
  most out-of-control nutcase in the young history of the nation and
  rally behind his world-threatening abuse of guns.  The once noble
  concern over validation to curb excessive costs of too powerful a
  tool for the people who used it, has turned into an equally insane
  abuse of power in the XML world.  How could such staggering idiots
  as have become "leaders" of the XML world and the free world come to
  their power?  Clearly, they gain support from the masses who have no
  concerns but their immediate needs, no ability to look for long-term
  solutions and stability, no desire to think further ahead than that
  each individual decision they make be the best for them.  Lethargy
  and pessimism, lack of long-term goals, apathy towards consequences,
  they are all symptoms of depressed people, and it is perhaps no
  coincidence that the world economy is now in a depression.  My take
  on it is that it is because too much growth also rewarded people of
  such minuscule intellectual prowess that they turned to fraud rather
  than tackle the coming negative trends intelligently.  Whether Enron
  or W3C or the GOP, everyone knows that fraud does pay in the short
  term and that bad money drives out good.  When even the staggering
  morons are rewarded, the honest and intelligent must lose, and even
  the best character will have a problem when being honest means that
  he forfeits a chance to receive a hundred million dollars.  In both
  the Bush administration and the W3C standards administration, we see
  evidence that large groups of people did not believe that it would
  matter who assumed power.  I am quite certain that just as Bush is
  supposed to be a thoroughly /likable/ person, the people who work up
  the most demented "standards" in the W3C lack that personality trait
  that is both abrasive and exhibits leadership potential.  When the
  overall growth of something is so rapid that an idiotic decision no
  longer causes any immediate losses, the number of such decisions
  will grow without bounds until the losses materialize, such as in an
  economic depression.  When the losses are so diffused as to not even
  affect the idiots behind the decisions, they can stay in power for a
  very long time until they are blamed for a large number of ills they
  had no power to predict, but that their decisions precisely caused.

| I use XML on a daily basis and think it is a simple and intelligent
| way to represent data.

  A comment on this statement is by now entirely superfluous.

| I would like to hear why you think it is so bad, can you be more
| specific please?

  If you really need more information, search the Net, please.

| And how would you improve on it?

  A brief summary, then: Remove the syntactic mess that is attributes.
  (You will then find that you do not need them at all.)  Enclose the
  /element/ in matching delimiters, not the tag.  These simple things
  make people think differently about how they use the language.
  Contrary to the foolish notion that syntax is immaterial, people
  optimize the way they express themselves, and so express themselves
  differently with different syntaxes.  Next, introduce macros that
  look exactly like elements, but that are expanded in place between
  the reader and the "object model".  Then, remove the obnoxious
  character entities and escape special characters with a single
  character, like \, and name other entities with letters following
  the same character.  If you need a rich set of publishing symbols,
  discover Unicode.  Finally, introduce a language for micro-parsers
  that can take more convenient syntaxes for commonly used elements
  with complex structure and make them /return/ element structures
  more suitable for processing on the receiving end, and which would
  also make validation something useful.  The overly simple regular
  expression look-alike was a good idea when processing was expensive
  and made all decisions at the start-tag, but with a DOM and less
  stream-like processing, a much better language should be specified
  that could also do serious computation before validating a document
  -- so that once again processing could become cheaper because of the
  "markup", not more expensive because of it.
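
  To make the summary concrete, a toy reader in Python for such a
  notation; the syntax here is hypothetical, invented for this sketch
  and not any existing proposal: the whole element is wrapped in
  matching parentheses, there are no attributes, and a single \
  escapes the special characters.

```python
# Toy reader for a hypothetical attribute-free notation in which the
# whole *element*, not the tag, is wrapped in matching delimiters.
# Assumes well-formed input; \ makes the next character literal.

def read(s, i=0):
    """Parse one element starting at s[i]; return (tree, next index)."""
    assert s[i] == "("
    i += 1
    j = i
    while s[j] not in " ()":           # element name
        j += 1
    node, i = [s[i:j]], j
    while s[i] != ")":
        if s[i] == " ":                # separator between items
            i += 1
        elif s[i] == "(":              # child element: recurse
            child, i = read(s, i)
            node.append(child)
        else:                          # text run, honoring \ escapes
            text = []
            while s[i] not in "()":
                if s[i] == "\\":
                    i += 1             # take the next char literally
                text.append(s[i])
                i += 1
            node.append("".join(text))
    return node, i + 1

tree, _ = read(r"(p (em markup) costs \(almost\) nothing here)")
print(tree)   # → ['p', ['em', 'markup'], 'costs (almost) nothing here']
```

  One immediate payoff of delimiting the element rather than the tag is
  that the reader needs no tag-name matching at all.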

  But the one thing I would change the most from a markup language
  suitable for marking up the incidental instruction to a type-setter
  to the data representation language suitable for the "market" that
  XML wants, is to go for a binary representation.  The reasons for
  /not/ going binary when SGML competed with ODA have been reversed:
  When information should survive changes in the software, it was an
  important decision to make the data format verbose enough that it
  was easy to implement a processor for it and that processors could
  liberally accept what other processors conservatively produced, but
  now that the data formats that employ XML are so easily changed
  that the software can no longer keep up with them, we need to slam on
  the brakes and tell the redefiners to curb their enthusiasm, get it
  right before they share their experiments with the world, and show
  some respect for their users.  One way to do that is to increase the
  cost of changes to implementations without sacrificing readability
  and without making the data format more "brittle", by going binary.
  Our information infrastructure has become so much better that the
  nature of optimization for survivability has changed qualitatively.
  The question of what we humans need to read and write no longer has
  any bearing on what the computers need to work with.  One of the
  most heinous crimes against computing machinery is therefore to
  force them to parse XML when all they want is the binary data.  As
  an example, think of the Internet Protocol and Transmission Control
  Protocol in XML terms.  Implementors of SNMP regularly complained
  that parsing the ASN.1 encodings took a disproportionate amount of
  processing time, but they also acknowledged that properly done, it
  mapped directly to the values they needed to exchange.  Now, think
  of what would have happened had it not been a Simple, but instead
  some moronic excuse for an eXtensible Network Management Protocol.
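
  To see what the SNMP lesson means in bytes, a small sketch (the
  field layout is invented for illustration; it is not real ASN.1 or
  SNMP encoding): pack one interface reading as fixed-width binary and
  as minimal XML, and compare:

```python
import struct

# Hypothetical SNMP-style reading: interface index plus two counters.
ifindex, in_octets, out_octets = 3, 918273645, 192837465

# Fixed-layout binary: three unsigned 32-bit big-endian integers.
binary = struct.pack(">III", ifindex, in_octets, out_octets)

# The same three values as a minimal XML document.
xml = ("<reading><ifindex>%d</ifindex><in>%d</in><out>%d</out></reading>"
       % (ifindex, in_octets, out_octets))

print(len(binary), len(xml))   # → 12 77

# Decoding the binary form is one call, with no parsing at all.
assert struct.unpack(">III", binary) == (ifindex, in_octets, out_octets)
```

  Here the XML form is more than six times the size before any parsing
  cost is counted, and the receiver that wants the integers still has
  to find and convert every decimal string.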

  Another thing is that we have long had amazingly rich standards for
  such "display attributes" as many now use HTML and the like for.  The
  choice to use SGML for web publication was not entirely braindead,
  but it should have been obvious from the outset that page display
  would become important, if not immediately, then after watching what
  people were trying to do with HTML.  The Web provided me with a much
  needed realization that information cannot be /fully/ separated from
  its presentation, and showed me something I knew without verbalizing
  explicitly, that the presentation form we choose communicates real
  information.  Encoding all of it via markup would require a very
  fine level of detail, not to mention /awareness/ of issues so widely
  dispersed in the population that only a handful of people per
  million grasp them.  Therefore, to be successful, there must be an
  upper limit to the complexity of the language defined with SGML, and
  one must go on to solve the next problem, not sit idle with a set of
  great tools and think "I ought to use these tools for something".
  Stultifying as the language of content models may be, it amazes me
  that people do not grasp that they need to use something else when
  it becomes too painful to express with SGML, but I am in the highly
  privileged position of knowing a lot more than SGML when I pronounce
  my judgment on XML.  For one thing, I knew Lisp before I saw SGML,
  so I know what brilliant minds can do under optimal conditions and
  when they ensure that the problem is still bigger than the solution.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
From: Kaz Kylheku
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <cf333042.0212190934.c5452bf@posting.google.com>
Oleg <············@myrealbox.com> wrote in message news:<············@newsmaster.cc.columbia.edu>...
> Paolo Amoroso wrote:
> 
> > 
> >> 4. XML is dumped in favor of sexprs
> > 
> > Make sexp not war!
> 
> \documentclass[letterpaper]{article}
> \bibliographystyle{unsrt}
> 
> \begin{document}
> \begin{S_exp_war}
> Lispers like to flame XML. Now, what is wrong with XML that
> is {\em not} "wrong" with \LaTeX and HTML?
> \end{S_exp_war}
> \end{document}

As a language, TeX is a complete pile of crap. People use it,
especially in academia, because there is no other way to get equally
good looking documents. Don't assume that because some Lispers use
LaTeX, they endorse it as a language.

Knuth is one of those people who can hack some arcane garbage into
producing great output that is the result of sophisticated
computation. He's the ultimate Real Programmer.

Nothing epitomizes Knuth better than the obtuse instruction set
architecture he designed for the programming examples and exercises in
TAOCP. Bizarre numeric representations, strange encodings, odd
limitations. Blech. (Rhymes with TeX).
From: wni
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <WSsO9.339144$GR5.105327@rwcrnsc51.ops.asp.att.net>
Kaz Kylheku wrote:

> As a language, TeX is a complete pile of crap. People use it,
> especially in academia, because there is no other way to get equally
> good looking documents. Don't assume that because some Lispers use
> LaTeX, they endorse it as a language.

In what way is it a pile of crap? The so-called "mistake" people claimed
that TeX made was the static/dynamic scoping of macros. However,
I have yet to see how the Algol-like scoping works in such a typesetting
language.

TeX doesn't need the endorsement of Lispers. It's used by
mathematicians, physicists, and theoretical computer scientists.

> 
> Knuth is one of those people who can hack some arcane garbage into
> producing great output that is the result of sophisticated
> computation. He's the ultimate Real Programmer.

I am not sure about the "arcane garbage" claim. Whether
Knuth is *the* ultimate real programmer is very much disputable.

> 
> Nothing epitomizes Knuth better than the obtuse instruction set
> architecture he designed for the programming examples and exercises in
> TAOCP. Bizarre numeric representations, strange encodings, odd
> limitations. Blech. (Rhymes with TeX).

Did you ever read how Knuth explains the motivation for and the
intended improvements in the MIX language? Many people don't like it,
but that doesn't stop you from skipping all the MIX-coding stuff and
enjoying the books. It's only when you are reading parts of volume
two that you need some of the MIX code.


wni at attbi dot com
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey3bs356hf8.fsf@cley.com>
* wni  wrote:

> In what way is it a pile of crap? The so-called "mistake" people claimed
> that TeX made was the static/dynamic scoping of macros. However,
> I have yet to see how the Algol-like scoping works in such a typesetting
> language.

Try writing complex TeX macros some time.  It is *so* much harder than
it needs to be.  Then try and write a set of complex macros that
doesn't give you weird contextual issues (oh, you can't use that macro
in headings, you have to use this other one that does the same thing
except it works in headings).  These problems don't have to exist, but
they are very hard to avoid in TeX.

--tim 
From: Joe Marshall
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <y967l7qq.fsf@ccs.neu.edu>
Tim Bradshaw <···@cley.com> writes:

> * wni  wrote:
> 
> > In what way is it a pile of crap? The so-called "mistake" people claimed
> > that TeX made was the static/dynamic scoping of macros. However,
> > I have yet to see how the Algol-like scoping works in such a typesetting
> > language.
> 
> Try writing complex TeX macros some time.  

Try writing *simple* TeX macros that a) work in all contexts, or 
b) compose predictably.
From: Harald Hanche-Olsen
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <pcod6njcmll.fsf@thoth.math.ntnu.no>
+ Joe Marshall <···@ccs.neu.edu>:

| Tim Bradshaw <···@cley.com> writes:
| 
| > Try writing complex TeX macros some time.  
| 
| Try writing *simple* TeX macros that a) work in all contexts, or 
| b) compose predictably.

No problem, so long as you can do it entirely within the macro
processor (aka TeX's mouth).  In fact, you can do a decent bit of
functional programming there.  (Alan Jeffrey: Lists in TeX's Mouth,
Tugboat vol 11 #2 (1990), 237-245.)

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Yes it works in practice - but does it work in theory?
From: Dorai Sitaram
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <aups78$c3g$1@news.gte.com>
In article <···············@cley.com>, Tim Bradshaw  <···@cley.com> wrote:
>* wni  wrote:
>
>> In what way is it a pile of crap? The so-called "mistake" people claimed
>> that TeX made was the static/dynamic scoping of macros. However,
>> I have yet to see how the Algol-like scoping works in such a typesetting
>> language.
>
>Try writing complex TeX macros some time.  It is *so* much harder than
>it needs to be.  Then try and write a set of complex macros that
>doesn't give you weird contextual issues (oh, you can't use that macro
>in headings, you have to use this other one that does the same thing
>except it works in headings).  These problems don't have to exist, but
>they are very hard to avoid in TeX.

These problems are really the problems of LaTeX, which
raises expectations about composability and
first-class-ness that the raw TeX primitives themselves
do not, and that LaTeX ultimately fails to satisfy.
Eg, and I think Kaz mentioned this, in LaTeX you can't
call \verb inside a \footnote or a \section, although
even an intermediate user cannot readily recognize why
this should be.  For a person coming to terms with the
TeX language based on itself, these problems are
peculiar library problems.  He would read the TeXbook,
and then would write \footnote and \section not as
functions, but as things that open a group, so he
doesn't have the situation of having an
already-read argument with already-set catcodes. 

This is not to say that TeX has no problems (I will
come to that, but even so, it doesn't come close to
satisfying any conditions to be called "a pile of
crap").  But you have to approach it on its own terms
and with humility, instead of "Hey, this is not like my
beloved ANSI Common Lisp."  A \footnote that interacts
well with a potential \verb is then easy and only
slightly tedious to write, but only one adept has to do
it (because TeX is not a general-purpose or classroom
language that everyone has to excel in in order to get
good grades or promotions).  Actually nobody has
to do it, because a good \footnote has already been
done in plain.tex.  I don't understand why LaTeX didn't
use the same approach for its own \footnote. 

\section is a wee bit tougher.  A \section title is
used twice, once in the text, and once in a potential
ToC.  So you have to \write it out to an aux file in
addition to typesetting it in-place.  These require two
different catcode assignments for the characters in the
title.  One could solve this problem by writing the
title to a temporary location, and then reading it with
different catcodes as the need arises.  String ports
would have solved this neatly, but temp files are
good enough, and indeed I used something like that for
my own documentation needs, where I embed \verb's in
section titles often enough. 

This is tedious stuff for people who just want things
to work already, but not insurmountably so, and it is
not going to satisfy someone expecting Lisp-like
composability and higher-order functions and macros etc
(but why is he wanting all that if he wants things
working already?  Is he even someone who can ever be
satisfied?).  And why should TeX be like Lisp?  TeX
reads characters one at a time, it doesn't read structured
data, like Lisp's Read does.  If it did the latter, the
amount of data it would have to slurp at each of its
reads can be very large, given idiosyncratic documents
(already Thomas-Mann-like paragraphs can rattle TeX,
because TeX does some whole-paragraph analysis).  TeX's
goals are different -- excellent typographic quality
with good reliability and without insanely anal markup
-- and it solves them admirably.  

The flaws of TeX are not really that it is not
Lisplike, but that it is mired in a depression-era
approach to computational resources.  There are only so
many registers available for use as counters, dimens,
skips, input streams, output streams, what have you.
Once you accept that it is a matter of manipulating
these registers, and not about doing Lisp programming,
the only problem that remains is having enough of these
registers.  This TeX does not, and for no good reason
other than age.  It should be very doable to write a
modern TeX that does not have these limitations,
or at least move their numbers from 2**8 (or 2**4, for
streams) to the slightly less impoverished 2**16
ballpark -- so doable, in fact, that it has already
been done more than a decade ago.  This would be eTeX,
although I'm not 100% sure that they addressed the
problem of the fewness of input and output streams,
because fixing that would let the user use
string-port-like mechanisms freely without worrying
about running out of resources.  You continue to have
the option of doing expressive programming outside of
TeX (defining an \evalFollowingSexpInLisp in TeX is
very easy), but are now assured that the TeX you
generate will not die for want of registers.
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey3n0mn1h2b.fsf@cley.com>
* Dorai Sitaram wrote:

> These problems are really the problems of LaTeX, which
> raises expectations about composability and
> first-class-ness that the raw TeX primitives themselves
> do not, and that LaTeX ultimately fails to satisfy.

No, I think the problems of writing non-trivial sets of macros in TeX
(of which LaTeX is just one) are problems of TeX, not LaTeX.  I've
done such things (in plain TeX, as well as in LaTeX), and I found it
really much harder work than it needed to be.  *Perhaps* it is not
harder work than is inherent in a macro language (or for that matter
things like Unix shells - complex shell scripts are pretty hard to get
right if you want to not have them blow up if you feed them something
unexpected, like strings with spaces in...), but, well, the answer to
that is not to use a macro language.  TeX isn't helped any by the
TeXbook, which was clearly written by someone with his head on upside
down.

The resource issues you mention are another problem, but not really
related.  

> This is not to say that TeX has no problems (I will
> come to that, but even so, it doesn't come close to
> satisfying any conditions to be called "a pile of
> crap").  

No, I don't think it's crap - indeed if you go back in this thread I
posted an article where I said that it's unequalled, in my opinion,
for typing mathematical stuff, both in quality of output, and in ease
of input.  I do think that TeX *as a programming language for
typesetting* could be very much better, and not (just) because of
resource issues.  I don't particularly want it to be like Lisp, I just
want it to be less painful.  In my opinion, the way to do that is by
having the programming language be a different thing than the stuff
you type in, so `macros' would be written in a completely separate
syntax than whatever you type.  I'm not even sure that I would change
the look-and-feel of the surface syntax at all: TeX is pleasant enough
to type text and maths into already.

> But you have to approach it on its own terms
> and with humility, instead of "Hey, this is not like my
> beloved ANSI Common Lisp."  

I'm not likely to do that - I knew TeX some time before I knew CL
(possibly before I knew any Lisp), indeed, I think I knew TeX before
CLtL1 was published...

--tim
From: Gareth McCaughan
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <slrnb120tv.11v7.Gareth.McCaughan@g.local>
Tim Bradshaw wrote:

>  No, I think the problems of writing non-trivial sets of macros in TeX
>  (of which LaTeX is just one) are problems of TeX, not LaTeX.  I've
>  done such things (in plain TeX, as well as in LaTeX), and I found it
>  really much harder work than it needed to be.  *Perhaps* it is not
>  harder work than is inherent in a macro language (or for that matter
>  things like Unix shells - complex shell scripts are pretty hard to get
>  right if you want to not have them blow up if you feed them something
>  unexpected, like strings with spaces in...), but, well, the answer to
>  that is not to use a macro language.  TeX isn't helped any by the
>  TeXbook, which was clearly written by someone with his head on upside
>  down.

I loved (and still love) the TeXbook, but then my head is on
upside down too. The fact that I encountered it at an impressionable
age (16 or so) probably helps, too.

Writing non-trivial things in TeX is difficult for several
different reasons. It's all done with macro expansion (and,
just as when talking to C people it's necessary to point out
that Lisp's macros aren't the ones they're used to, I should
maybe mention for the benefit of non-TeXnicians that TeX's
macros are possibly even nastier than C's). The whole thing
is designed in this amazingly baroque way, where it seems
that every time Knuth found something TeX couldn't do he
added another random operator that enabled it to do it.
\expandafter, for instance. And, even once you're past the
pain of the *language*, the fact is that lots of things are
just really hard to do in terms of TeX's boxes-and-glue
model. (In something like the same way as programming in
a more declarative language is hard for people used to
programming in imperative languages. But, I think, worse.)
And there are those resource problems too, and the way that
lots of things are done in terms of magic numbers (which
is related to the resource limitation thing but not the
same problem).

But, dammit, TeX is still miles ahead of anything else that's
remotely as readily available when it comes to typesetting,
especially for mathematics.

>  No, I don't think it's crap - indeed if you go back in this thread I
>  posted an article where I said that it's unequalled, in my opinion,
>  for typing mathematical stuff, both in quality of output, and in ease
>  of input.  I do think that TeX *as a programming language for
>  typesetting* could be very much better, and not (just) because of
>  resource issues.  I don't particularly want it to be like Lisp, I just
>  want it to be less painful.  In my opinion, the way to do that is by
>  having the programming language be a different thing than the stuff
>  you type in, so `macros' would be written in a completely separate
>  syntax than whatever you type.  I'm not even sure that I would change
>  the look-and-feel of the surface syntax at all: TeX is pleasant enough
>  to type text and maths into already.

That would be interesting to experiment with.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey34r8uyxyi.fsf@cley.com>
* Gareth McCaughan wrote:

> I loved (and still love) the TeXbook, but then my head is on
> upside down too. The fact that I encountered it at an impressionable
> age (16 or so) probably helps, too.

I love it too, actually.  Indeed, I bought it before I had access to a
copy of TeX, and read it end to end.  It's a great book, but it's a
*terrible* reference manual.  I remember working out that if you
wanted to find the useful information on something (rather than a
forward reference, or a joke about it, or something which only
mentioned it in the most obscure way possible), you should look for
the third boldface index entry (or something, I forget the details).
That wouldn't work reliably, but it was definitely better than the
obvious approach of looking at the first (bold?) entry, or anything
like that.

Of course, the mistake CL made is not having a spec that arcane.

--tim
From: Dorai Sitaram
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <av4i74$koa$1@news.gte.com>
In article <···············@cley.com>, Tim Bradshaw  <···@cley.com> wrote:
>* Gareth McCaughan wrote:
>
>> I loved (and still love) the TeXbook, but then my head is on
>> upside down too. 
>
>I love it too, actually.  Indeed, I bought it before I had access to a
>copy of TeX, and read it end to end.  It's a great book, but it's a
>*terrible* reference manual.  I remember working out that if you
>wanted to find the useful information on something (rather than a
>forward reference, or a joke about it, or something which only
>mentioned it in the most obscure way possible), you should look for
>the third boldface index entry (or something, I forget the details).
>That wouldn't work reliably, but it was definitely better than the
>obvious approach of looking at the first (bold?) entry, or anything
>like that.
>
>Of course, the mistake CL made is not having a spec that arcane.

TeX is a domain-specific tool and the domain it
addresses is both complicated and has quite a bit of
history and (craft) tradition.  It is to be expected
that a manual for it will be difficult simply based on
the subject matter, if the manual stakes any claim at
all to comprehensiveness.  General-purpose programming
languages, on the other hand, are largely about
defining and manipulating structures internal to the
world that they create and offer, and there is a
strongly omphaloskeptic feedback loop to ensure that
the structures stay easy and easily described, or at
least satisfy some Ockhamesque criterion that makes
them elegant to describe cleanly.  Thus a CL spec has a
better shot at being unarcane than a TeX manual, even
when both of them are committed to going into gory
detail.  (Note, too, that the parts of CL that begin to
approach arcaneness have to do with pathnames and
Format, which involve interfacing with a messy outside
reality.)  

That the TeXbook even suggests a comparison, however
unfavorable, to a general-purpose programming language
spec, is a tribute to how well its author has tamed
TeX's domain for the purposes of casual
programmability. 
From: Tim Bradshaw
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <ey3wulli4xi.fsf@cley.com>
* Dorai Sitaram wrote:

> TeX is a domain-specific tool and the domain it
> addresses is both complicated and has quite a bit of
> history and (craft) tradition.  It is to be expected
> that a manual for it will be difficult simply based on
> the subject matter, if the manual stakes any claim at
> all to comprehensiveness.  

This is rubbish, sorry.  The TeXbook, and in particular its index, is
(are? am I referring to both?) just gratuitously weird and hard to
use.  I have read many other books in the area of typography (and
computer typesetting), and the TeXbook is by far the least useful.
This is not to say it is not a very good read - on the whole it is.

--tim
From: ozan s yigit
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <vi4adidola7.fsf@blue.cs.yorku.ca>
···@ashi.footprints.net (Kaz Kylheku) writes:

> Nothing epitomizes Knuth better than the obtuse instruction set
> architecture he designed for the programming examples and exercises in
> TAOCP.

many people don't realize TAOCP started in 1962, when he was only twenty
four. he was young and inexperienced, even if much brighter than most of
his critics...

>            ... Bizarre numeric representations, strange encodings, odd
> limitations. Blech. (Rhymes with TeX).

so what do you think of MMIX? any deep criticism of his newer 64-bit risc
architecture you want to share? see eg. mmixware, isbn 3-540-66938-8
in case you want to know more.

oz
---
bang go the blobs. -- ponder stibbons
From: Jens Axel Søgaard
Subject: Re: S-exp vs XML, HTML, LaTeX (was: Why lisp is growing)
Date: 
Message-ID: <3e1a3f9f$0$71702$edfadb0f@dread11.news.tele.dk>
ozan s yigit wrote:
> ···@ashi.footprints.net (Kaz Kylheku) writes:
>
>> Nothing epitomizes Knuth better than the obtuse instruction set
>> architecture he designed for the programming examples and exercises
>> in TAOCP.

> so what do you think of MMIX? any deep criticism of his newer 64-bit
> risc architecture you want to share? see eg. mmixware, isbn 3-540-
> 66938-8 in case you want to know more.

Knuth has put the description of MMIX online at

http://www-cs-faculty.stanford.edu/~knuth/mmix-news.html

--
Jens Axel Søgaard
From: Bulent Murtezaoglu
Subject: Re: Why lisp is growing
Date: 
Message-ID: <87bs3ktsp7.fsf@acm.org>
>>>>> "S" == stevesusenet  <············@yahoo.com> writes:
[...]
    S> 2. Having any tech-savvy person being able to list a number of
    S> popular applications written in lisp and having the people
    S> listening being already familiar with the names of those apps.
[...]

_Any_ tech-savvy person?  Ask people who consider themselves knowledgeable
on the internet about the most used MTA, HTTP server, DNS server etc.
and I highly doubt that they'll be able to name sendmail, apache and 
bind.  (most likely answer "what's an MTA?", everybody uses IIS, whatever 
comes with Windows).  Why should Lisp be different?  Besides, what popular 
app is written in Java?  

I think that the lack of convincing stories to tell Eran Gatt's bosses at 
JPL is much more important than getting random two-bit techies to name apps 
written in Lisp.  

cheers,

BM