From: thelifter
Subject: A Critique of Common Lisp
Date: 
Message-ID: <b295356a.0307112010.6be66b43@posting.google.com>
Please read till the end and refrain from ad hominem attacks. You don't
need to be like this particular individual:
http://groups.google.com/groups?q=g:thl309275826d&dq=&hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=874r5ox1c2.fsf%40bird.agharta.de

Here is a rather old paper (1984), but I'm including it here for those
who might be interested (one of the authors of this paper was also a
member of the Common Lisp committee).

http://www.dreamsongs.com/Essays.html
http://www.dreamsongs.com/NewFiles/clcrit.pdf

Now before you flame, I like Lisp, in fact I think it is the most
powerful language that exists. The question here is about
implementation. What are the drawbacks of one particular
implementation, in this case CL.

This paper might help in deciding which implementation to use,
especially if you are considering Scheme, or some other
implementation.

All in all I think anyone really interested in a language should also
know its weaknesses.

I'm aware that not everything mentioned in the paper must be true
today. So I would welcome it if the experts could point out where this
is the case.

Thanks for reading.

From: Adam Warner
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <pan.2003.07.12.07.19.02.433832@consulting.net.nz>
Hi thelifter,

> This paper [<http://www.dreamsongs.com/NewFiles/clcrit.pdf>] might help
> in deciding which implementation to use, especially if you are
> considering Scheme, or some other implementation.
> 
> All in all I think anyone really interested in a language should also
> know its weaknesses.
> 
> I'm aware that not everything mentioned in the paper must be true today.
> So I would welcome it if the experts could point out where this is the
> case.

How about providing some intellectual effort to ignite the conversation?
You're aware that not everything mentioned in the paper must be true
today. So go through the paper and comment on some criticisms, discussing
whether you believe each was justifiable then or now. If you raise
interesting issues, people will have something to talk about.

It's silly to propose others should consult this paper to decide whether
to use Scheme or another Lisp when (a) you acknowledge at the outset that
some items will not be true (today); and (b) you expect only experts to be
able to tell you what's untrue.

Here's a start:

   A spirit of compromise left many issues undefined. Thus, although
   programs written in COMMON LISP would be transportable, it is trivial
   to write programs which look syntactically like they are written in
   COMMON LISP but which dynamically are not, and will not work in another
   COMMON LISP implementation.

   An example is (EQ 3 3): It is not specified whether this is true or
   false.

The effect of (eq 3 3) is left undefined because integers in general could
be bignums, requiring more complicated machine code to determine their
equivalence than a simple pointer-style comparison. One should simply use
EQL or = to compare integers. If one uses the correct operator,
portability is assured. End of issue.

Later:

   An unwitting programmer might believe that the performance profile
   which has the LAMBDA application slower than the PROG form is universal
   over all machines. If the programmer realizes that this may not be the
   case, he can write a macro which initializes variables and then
   performs some computations. In some implementations this macro could
   expand into a LAMBDA application, while in others it could expand into
   a PROG form. The choice would then be machine dependent.

   The effect, of course, is that COMMON LISP code might be trivially
   transportable in terms of operational semantics, but not in terms of
   performance.

   Thus the robe of transportability hides the fact that programmers still
   need to program for their own implementations, and the programs so
   produced may not be transportable to the degree one might be led to
   believe.

Indeed there are extremely different performance characteristics between
Common Lisp implementations. One doesn't get much further apart than the
ways CLISP and CMUCL/SBCL are implemented. But the fact that code can still run
unchanged between implementations is a huge positive. Being able to start
with semantically correct but slower code and then work upon optimising it
is far more useful to me than having faster but broken code to debug.

The problem with the authors' criticism is that the alternative is worse:
non-trivial transportability, i.e. using different languages.

Many of these criticisms invoke the "unwitting programmer". Perhaps Common
Lisp was designed for programmers with wit.

   The true test is that, even though the bulk of COMMON LISP was
   reasonably well-defined in the summer of 1982, there are still no
   widely-available COMMON LISP's, despite many large efforts (e.g. CMU
   Spice project, DEC COMMON LISP project, Rutgers COMMON LISP project,
   Symbolics COMMON LISP project, S-1 COMMON LISP project).

Today we're all very fortunate with the quality of (mostly conforming)
ANSI Common Lisp implementations that are available.

   4.1 Generic Arithmetic

   Currently there is no computer that excels at both Lisp processing and
   numeric processing. For people who want to do number-crunching in Lisp
   on such machines, writing type-specific numeric code requires wordy
   type-declarations. To write an addition of two integers requires
   writing either:

                          (DECLARE (integer x y))
                          (+ x y)
   or:
                     (+ (the integer x) (the integer y))

   Neither of these is esthetically pleasing, nor is it likely that the
   average Lisp programmer is going to bother all the time.

As I noted the other day, one can write macros that generate declarations
themselves if aesthetics are an issue.
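
A minimal sketch of such a macro (the name INT+ is mine, purely for
illustration):

   (defmacro int+ (x y)
     `(the integer (+ (the integer ,x) (the integer ,y))))

so that (int+ a b) expands into the verbose declared form above.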

   4.2 Arithmetic Explosion
   
   A large class of number types was introduced. These include 4 sizes of
   fixed precision floating point numbers, rational numbers, complex
   numbers made up of pairs of these (after long debate the committee at
   least restricted the types of the real and complex parts to be the
   same), fixed precision integers, and arbitrary precision integers. All
   these number types take up a large number of tag values and require
   extensive runtime support, as the types are orthogonal from the
   functions which operate on them. Besides addition, subtraction,
   multiplication and a few varieties of division, there are trigonometric
   functions, logarithmics, exponentials, hyperbolic functions, etc.

Bring it on. I love Common Lisp's extensive numeric support. I'm glad for
example that rationals are mandatory for conforming implementations.

   4.3 Function Calls Too Expensive

   Function calls are the hallmark of Lisp. Everyone expects that Lisp
   code will be sprinkled with many function calls. With the
   argument-passing styles defined by COMMON LISP, function calling must
   be expensive on most stock hardware.

This is not an important issue in 2003. I probably wish function calls
were even more extensive (so that, e.g., one could know whether a keyword
was supplied with a nil value or omitted; or keywords could be written
before plain arguments [so e.g. both (a :href "URI" "Text") and (a "Text"
:href "URI") would be legal without having to resort to custom parsing of
(&rest args)]). The cost of Lisp's function calls isn't an issue today
(and was probably a flimsy issue in 1984).

   4.4 Computer Size

   COMMON LISP requires a large computer memory. To support only the
   function names and keyword names alone requires many kilobytes. To
   support the many copies of keyword-taking functions to improve the
   speed of COMMON LISP on stock hardware would require megabytes of
   memory. The compiled code for functions is large because of the amount
   of function-call overhead at the start to accommodate both possible
   keyword and optional arguments and also the inline type dispatches.

In the 21st century many PDAs have more memory than workstations of the
1980s or even 1990s.

   4.5 Good Ideas Corrupted

   Although this idea is simple, COMMON LISP took FORMAT to its ultimate
   generalization. With FORMAT it is possible to do iteration, conditional
   expressions, justification, non-local returns within iterations, and
   pluralization of words according to the values for the slots. The
   language used by FORMAT is not Lisp, but a language that uses strings
   with special escape characters that either represent substitutions,
   substitutions with specific printing control, or commands. This
   language is somewhat large by itself, but the main problem is that it
   isn't Lisp. Many language designers believe that the diction for
   expressing similar constructions should be similar. With FORMAT the
   COMMON LISP committee abandoned this goal.

Again, bring it on. Format is extremely useful for text output. And the
best indication it didn't turn out to be a problem is that Lisp programs
still look like Lisp and haven't been subsumed by evil format strings.

   4.6 Sequences
   
   Lists and one-dimensional arrays have fundamentally different access
   time characteristics (constant for arrays, linear for lists). They have
   even greater fundamentally different characteristics for being
   shortened and lengthened. We believe that trying to hide these
   differences behind generic function names will be a source of
   programmer overhead and will lead to inefficient programs.

I don't find it difficult to understand the performance characteristics of
using generic operators on lists or vectors. It certainly isn't a source
of my own "programmer overhead". The argument of any substance is that
generic operators have a run-time cost if detailed declarations are not
used (next paragraph). This is the cost of many generic operations, and it
is a cost that most Lisp users accept in order to enjoy the convenience of
a high level language.
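
For concreteness (LST and VEC here are hypothetical bindings, a list
and a vector of the same length):

   (elt lst 10000)   ; linear time: walks down 10000 conses
   (elt vec 10000)   ; constant time: a single array index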

   4.8 Subsets of COMMON LISP
   
   We believe that these points argue for a standard subset of COMMON
   LISP, especially for small, stock hardware implementations. The COMMON
   LISP committee has not defined a standard subset, and it appears that
   at least one subset, if not many such subsets, will appear. Unless
   these subsets are co-ordinated and widely adopted, then we will see one
   of the major goals of COMMON LISP--transportability--lost.

I am not looking for a defined subset of ANSI Common Lisp. I desire the
ability to use fully conforming implementations. Programmers in general
don't go around demanding less functionality so it's easier for someone
else to write an interpreter or compiler.

Regards,
Adam
From: Matthew Danish
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <20030712075903.GS17568@lain.mapcar.org>
On Sat, Jul 12, 2003 at 07:19:04PM +1200, Adam Warner wrote:
> This is not an important issue in 2003. I probably wish function calls
> were even more extensive (so that, e.g., one could know whether a keyword
> was supplied with a nil value or omitted; 

(defun foo (&key (test nil test-supplied-p))
  (format t "~&:test supplied? ~A~%" test-supplied-p))
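;; Sample calls (a hypothetical session):
;; (foo)            prints  :test supplied? NIL
;; (foo :test nil)  prints  :test supplied? T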

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Adam Warner
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <pan.2003.07.12.08.40.33.433800@consulting.net.nz>
Hi Matthew Danish,

> On Sat, Jul 12, 2003 at 07:19:04PM +1200, Adam Warner wrote:
>> This is not an important issue in 2003. I probably wish function
>> calls were even more extensive (so that, e.g., one could know whether a
>> keyword was supplied with a nil value or omitted;
> 
> (defun foo (&key (test nil test-supplied-p))
>   (format t "~&:test supplied? ~A~%" test-supplied-p))

Thanks Matthew. One day I'll speculate as to the meaning of life in c.l.l.
and someone will instantly respond with the answer.

Turns out the section in the HyperSpec to consult is 3.4 Lambda Lists, in
particular 3.4.1 Ordinary Lambda Lists and 3.4.1.4 Specifiers for keyword
parameters:

[&key {var | ({var | (keyword-name var)} 
              [init-form [supplied-p-parameter]])}* [&allow-other-keys]]

Cutting out the unused syntax we see your usage:
(&key (var init-form supplied-p-parameter))

"If no such argument pair exists, then the init-form for that specifier is
evaluated and the parameter variable is bound to that value (or to nil if
no init-form was specified). supplied-p-parameter is treated as for
&optional parameters: it is bound to true if there was a matching argument
pair, and to false otherwise."

Many thanks,
Adam
From: Matthew Danish
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <20030712093113.GT17568@lain.mapcar.org>
On Sat, Jul 12, 2003 at 08:40:36PM +1200, Adam Warner wrote:
> Thanks Matthew. One day I'll speculate as to the meaning of life in c.l.l.
> and someone will instantly respond with the answer.

With the immediate consequence of replacing this Universe with one that
is far more strange and mysterious, of course.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Julian St.
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <86znjkhw5z.fsf@jmmr.no-ip.com>
Matthew Danish <·······@andrew.cmu.edu> writes:

> On Sat, Jul 12, 2003 at 08:40:36PM +1200, Adam Warner wrote:
>> Thanks Matthew. One day I'll speculate as to the meaning of life in c.l.l.
>> and someone will instantly respond with the answer.
>
> With the immediate consequence of replacing this Universe with one that
> is far more strange and mysterious, of course.

You mean, that we'll become comp.lang.perl?

Regards,
Julian
-- 
	Don't keep up with the Joneses. Drag them down to your level.
From: Kent M Pitman
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <sfwoezzhkhj.fsf@shell01.TheWorld.com>
Adam Warner <······@consulting.net.nz> writes:

> It's silly to propose others should consult this paper to decide whether
> to use Scheme or another Lisp when (a) you acknowledge at the outset that
> some items will not be true (today);

Since it is about, effectively, a different language.

> and (b) you expect only experts to be able to tell you what's untrue.

> Here's a start:
> 
>    A spirit of compromise left many issues undefined.

The important omissions which were missed in the 1984 spec were in
many cases addressed in the ANSI CL specification, which resulted in a
spec that had a different degree of specificity and that patched huge
numbers of "loopholes" left open by CLTL.

>    Thus, although
>    programs written in COMMON LISP would be transportable, it is trivial
>    to write programs which look syntactically like they are written in
>    COMMON LISP but which dynamically are not, and will not work in another
>    COMMON LISP implementation.
> 
>    An example is (EQ 3 3): It is not specified whether this is true or
>    false.

For varying standards of "look syntactically like", I guess.
My standard is that if you know that (EQ 3 3) is not defined,
then it does not syntactically look like something that will transport.
YMMV.

> The effect of (eq 3 3) is left undefined because integers 

to include 3, incidentally, since I think at least one implementation
wanted bignums only, that is, a uniform representation for all integers.
This did create a legitimate question about what to make MOST-POSITIVE-FIXNUM.

The REAL problem with this paper (which I remember well) is that it was
never answered.  I suspect it was never answered because (a) it was not 
taken seriously by all [I didn't take it seriously, but then at that time
I don't know if anyone took me seriously either], (b) a reply probably would
not have been publishable in the same proceedings [which were at that time
already heavily anti-lisp and I subjectively believe more prone to publish
an anti-CL paper than a pro-CL paper], (c) a reply would not have been 
appropriate because that kind of paper doesn't belong in such proceedings
any more than the original paper did, and (d) by the time a reply could have
been published it was a year later and rather than publish papers, people
were already trying to _act_ by working on formation of a new committee.

> Later:
> 
>    An unwitting programmer might believe

This kind of phrasing is the earmark of an unscientific paper, IMO, and
is a strong reason why I didn't like the paper.

I much prefer citing the actual programmer.  At least my "cleanup issues"
for ANSI CL that followed said that _I_ had had a problem with x or y or z
porting issue and that _I_ felt the language would be better with a change.
Hypothesizing a programmer doesn't tell you whether this happened and the
person is ashamed, or the person is just claiming that in theory it could
happen.

>    that the performance profile
>    which has the LAMBDA application slower than the PROG form is universal
>    over all machines. If the programmer realizes that this may not be the
>    case, he can write a macro which initializes variables and then
>    performs some computations. In some implementations this macro could
>    expand into a LAMBDA application, while in others it could expand into
>    a PROG form. The choice would then be machine dependent.
> 
>    The effect, of course, is that COMMON LISP code might be trivially
>    transportable in terms of operational semantics, but not in terms of
>    performance.
> 
>    Thus the robe of transportability hides the fact that programmers still
>    need to program for their own implementations, and the programs so
>    produced may not be transportable to the degree one might be led to
>    believe.

For the case cited, I'd bet that anyone who ever fussed over whether LAMBDA
application or PROG was faster and wasted days worrying about it had their
company go out of business not because of the speed issue but because of an
overarching "lack of focus on the end-user need" issue.  No wonder we had
AI winter.
 
> The problem with the authors' criticism is that the alternative is worse:
> non-trivial transportability, i.e. using different languages.
> 
> Many of these criticisms invoke the "unwitting programmer". Perhaps Common
> Lisp was designed for programmers with wit.
> 
>    The true test is that, even though the bulk of COMMON LISP was
>    reasonably well-defined in the summer of 1982, there are still no
>    widely-available COMMON LISP's, despite many large efforts (e.g. CMU
>    Spice project, DEC COMMON LISP project, Rutgers COMMON LISP project,
>    Symbolics COMMON LISP project, S-1 COMMON LISP project).
> 
> Today we're all very fortunate with the quality of (mostly conforming)
> ANSI Common Lisp implementations that are available.

Indeed.  And to the extent that there were implementations for them to try,
that wouldn't have proved much about implementations either, since robust
implementations that really reflect deep understanding of language issues 
take years to evolve.  Rev 0 of nearly any product is junk.

>    4.1 Generic Arithmetic
> 
>    Currently there is no computer that excels at both Lisp processing and
>    numeric processing. For people who want to do number-crunching in Lisp
>    on such machines, writing type-specific numeric code requires wordy
>    type-declarations. To write an addition of two integers requires
>    writing either:
> 
>                           (DECLARE (integer x y))
>                           (+ x y)
>    or:
>                      (+ (the integer x) (the integer y))
> 
>    Neither of these is esthetically pleasing, nor is it likely that the
>    average Lisp programmer is going to bother all the time.
> 
> As I noted the other day, one can write macros that generate declarations
> themselves if aesthetics are an issue.

Further, as has come to be seen, you don't _need_ to write this everywhere,
but it is commonly done and well worth the time in mission-critical tight
inner loops.

>    4.2 Arithmetic Explosion
>    
>    A large class of number types was introduced. These include 4 sizes of
>    fixed precision floating point numbers, rational numbers, complex
>    numbers made up of pairs of these (after long debate the committee at
>    least restricted the types of the real and complex parts to be the
>    same), fixed precision integers, and arbitrary precision integers. All
>    these number types take up a large number of tag values and require
>    extensive runtime support, as the types are orthogonal from the
>    functions which operate on them. Besides addition, subtraction,
>    multiplication and a few varieties of division, there are trigonometric
>    functions, logarithmics, exponentials, hyperbolic functions, etc.
> 
> Bring it on. I love Common Lisp's extensive numeric support. I'm glad for
> example that rationals are mandatory for conforming implementations.
> 
>    4.3 Function Calls Too Expensive
> 
>    Function calls are the hallmark of Lisp. Everyone expects that Lisp
>    code will be sprinkled with many function calls. With the
>    argument-passing styles defined by COMMON LISP, function calling must
>    be expensive on most stock hardware.
> 
> This is not an important issue in 2003.

I think this was a specific criticism levied for CL vs the VAX, since I
think there was no way to use the native VAX function call frame in the
intended way to support CL.  (Whether that's a criticism of VAX or CL is
subjective, of course.)  But what this really says, again, is that rather
than "unwitting implementation" or "unwitting programmer" they should have
been naming names so that we could know what they really meant.  It worked
fine on the LispM hardware, which in 1984 no one could have been 100% sure
wouldn't end up becoming "stock hardware".  It was marketing, not technology,
that led elsewhere, IMO, though some might disagree.

> I probably wish function calls
> were even more extensive (so that, e.g., one could know whether a keyword
> was supplied with a nil value or omitted; or keywords could be written
> before plain arguments [so e.g. both (a :href "URI" "Text") and (a "Text"
> :href "URI") would be legal without having to resort to custom parsing of
> (&rest args)]). The cost of Lisp's function calls isn't an issue today
> (and was probably a flimsy issue in 1984).

Yeah, I've done this for some embedded languages. My fave is to have
each arg have a name and a position, where position really specifies
order, so that when you parse them you first grab the keys and mark
the corresponding ordered args as filled, then you make a post-pass
grabbing any remaining positionals.  This allows you to do things like
        give to john the ball
  and   give john the ball 

by 
  (defun give (dobj (iobj :key to)) ...)
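
For the flat (&rest args) case, a minimal sketch of the idea (the
function name is made up, and the per-argument ordering described
above is omitted):

  (defun parse-mixed-args (args key-names)
    (let ((by-key (make-hash-table)) (positional '()))
      ;; first, grab the :key value pairs wherever they occur;
      ;; anything else is set aside
      (loop while args do
        (let ((a (pop args)))
          (if (and (keywordp a) (member a key-names))
              (setf (gethash a by-key) (pop args))
              (push a positional))))
      ;; then whatever was set aside fills the positional slots in order
      (values by-key (nreverse positional))))

so that both (a :href "URI" "Text") and (a "Text" :href "URI") would
parse to the same result.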

Nevertheless, I agree with you on the fundamental point that CL was proper
in not just looking at present day conventions.  It planned for the future,
and got enough things right to do fairly well so far.  Not that there's
not more future to come...

>    4.4 Computer Size
> 
>    COMMON LISP requires a large computer memory. To support only the
>    function names and keyword names alone requires many kilobytes. To
>    support the many copies of keyword- taking functions to improve the
>    speed of COMMON LISP on stock hardware would require megabytes of
>    memory. The compiled code for functions is large because of the amount
>    of function-call overhead at the start to accomodate both possible
>    keyword and optional arguments and also the inline type dispatches.
> 
> In the 21st century many PDAs have more memory than workstations of the
> 1980s or even 1990s.

Even in 1984 we knew this would be so.  People even knew of Moore's law
somewhere thenabouts, but didn't trust it as much.  But people wanted tools
of "today", not just tomorrow.  They were short-sighted, I think, because
most of what they wanted to do "only for today" would be discarded and lost,
while the things they did with some planning for tomorrow have survived...

I stopped programming "only for today" the first time I had to do a Macsyma
port and I had to basically throw away all the machine-specific cool stuff
I'd written for a processor that didn't seem to be surviving.  No matter how
cool it was, I realized that survival and dependable availability was more 
cool.  And I took on a different notion of "what mattered".

>    4.5 Good Ideas Corrupted
> 
>    Although this idea is simple, COMMON LISP took FORMAT to its ultimate
>    generalization. With FORMAT it is possible to do iteration, conditional
>    expressions, justification, non-local returns within iterations, and
>    pluralization of words according to the values for the slots. The
>    language used by FORMAT is not Lisp, but a language that uses strings
>    with special escape characters that either represent substitutions,
>    substitutions with specific printing control, or commands. This
>    language is somewhat large by itself, but the main problem is that it
>    isn't Lisp. Many language designers believe that the diction for
>    expressing similar constructions should be similar. With FORMAT the
>    COMMON LISP committee abandoned this goal.
> 
> Again, bring it on. Format is extremely useful for text output. And the
> best indication it didn't turn out to be a problem is that Lisp programs
> still look like Lisp and haven't been subsumed by evil format strings.

Special purpose languages were historically needed to put I/O in its place.
The fact is that Maclisp programs were so dominated by I/O that you couldn't
even find the algorithms.  Programs looked like:

 (PROGN (TERPRI)
        (PRINC "Hello, ")
        (PRINC USER)
        (PRINC ".")
        (TERPRI))

It was awful.  People who grumbled about FORMAT were not people who did
much I/O in the first place, IMO.
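
For comparison, the FORMAT equivalent of that whole PROGN is a single
form:

 (FORMAT T "~%Hello, ~A.~%" USER)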

>    4.6 Sequences
>    
>    Lists and one-dimensional arrays have fundamentally different access
>    time characteristics (constant for arrays, linear for lists). They have
>    even greater fundamentally different characteristics for being
>    shortened and lengthened. We believe that trying to hide these
>    differences behind generic function names will be a source of
>    programmer overhead and will lead to inefficient programs.
> 
> I don't find it difficult to understand the performance characteristics of
> using generic operators on lists or vectors. It certainly isn't a source
> of my own "programmer overhead". The argument of any substance is that
> generic operators have a run-time cost if detailed declarations are not
> used (next paragraph). This is the cost of many generic operations, and it
> is a cost that most Lisp users accept in order to enjoy the convenience of
> a high level language.

This was a very legit criticism but was lost in a sea of non-legit ones.
(It was also probably not worth a paper on its own, though.)

>    4.8 Subsets of COMMON LISP
>    
>    We believe that these points argue for a standard subset of COMMON
>    LISP, especially for small, stock hardware implementations. The COMMON
>    LISP committee has not defined a standard subset, and it appears that
>    at least one subset, if not many such subsets, will appear. Unless
>    these subsets are co-ordinated and widely adopted, then we will see one
>    of the major goals of COMMON LISP--transportability--lost.
> 
> I am not looking for a defined subset of ANSI Common Lisp. I desire the
> ability to use fully conforming implementations. Programmers in general
> don't go around demanding less functionality so it's easier for someone
> else to write an interpreter or compiler.

The problem is that everyone wants a different subset.

In the Macsyma group at Symbolics once, we had a customer that purchased
Macsyma and wanted a _discount_ for not using all of its features.
And we were stupid enough to give one.    (I wasn't consulted to tell
them how idiotic that was.  NO ONE uses the full language, and it's more,
not less expensive, to track that.)  We ended up remob'ing (uninterning)
all the symbols they didn't use and selling them the key to turning 
individual symbols back on when they realized they needed "extras".
It was awful.

Subsets are like the problem of "prayer in school".  Everyone likes
the idea until it gets down to saying whose prayer.  Then suddenly the
details of the commonality matter, and then we realize that everyone
wants something simple but not all want the same simple thing.
From: Adam Warner
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <pan.2003.07.12.15.45.36.275492@consulting.net.nz>
Hi Kent M Pitman,

> In the Macsyma group at Symbolics once, we had a customer that purchased
> Macsyma and wanted a _discount_ for not using all of its features. And
> we were stupid enough to give one.    (I wasn't consulted to tell them
> how idiotic that was.  NO ONE uses the full language, and it's more, not
> less expensive, to track that.)  We ended up remob'ing (uninterning) all
> the symbols they didn't use and selling them the key to turning
> individual symbols back on when they realized they needed "extras". It
> was awful.

Marvellous commentary and historical context, thanks Kent. This situation
is funny and fascinating. Selling Lisp by the interned symbol is perhaps
the most extensive example of software product and price differentiation
ever implemented.

Imagine the one-day specials: Symbolics Lisp only $49.95.*

A sensible amount of product differentiation is typically a
profit-increasing strategy that allows a business to capture more consumer
surplus from customers.

<http://www.microsoft.com/office/evaluation/fastfacts.asp#header2>
Not only are the products differentiated by functionality but also by
OEM, full retail, upgrade retail, individual product and volume pricing.

Regards,
Adam

* #'CAR optional extra.
From: Scott L. Burson
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <vhptql1silepd4@news.supernews.com>
Adam Warner wrote:

> [...]  I probably wish function calls
> were even more extensive (so that, e.g., one could know whether a keyword
> was supplied with a nil value or omitted [...])

You can know this!  This works:

   (defun foo (&key (x nil x-p)) (and x-p (list x)))
   (foo)  =>  NIL
   (foo :x nil)   =>  (NIL)

HyperSpec section 3.4.1, "Ordinary Lambda Lists".

-- Scott

-- 
To send email, remove uppercase characters from address in header.
From: Kent M Pitman
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <sfwvftudz4m.fsf@shell01.TheWorld.com>
"Scott L. Burson" <········@AsMympoiesis.com> writes:

> Adam Warner wrote:
> 
> > [...]  I probably wish function calls
> > were even more extensive (so that, e.g., one could know whether a keyword
> > was supplied with a nil value or omitted [...])
> 
> You can know this!  This works:
> 
>    (defun foo (&key (x nil x-p)) (and x-p (list x)))
>    (foo)  =>  NIL
>    (foo :x nil)   =>  (NIL)
> 
> HyperSpec section 3.4.1, "Ordinary Lambda Lists".

Indeed.  Though note that this is not hard to know even otherwise.
In effect, the built-in syntax doesn't supply a missing capability,
it standardizes a notation for doing something that would otherwise
need to be done idiomatically:

 (defvar *missing* (list '*missing*))

 (defun foo (&key (x *missing*))
   (if (eq x *missing*) 'missing 'given))

 (foo)        => MISSING
 (foo :x nil) => GIVEN

The only way to make it impossible to tell whether something is given
or not would be to remove the ability to specify a default, since
the default can always be arranged to be unique in some appropriate way.
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F1DF923.4000201@nyc.rr.com>
Kent M Pitman wrote:
> "Scott L. Burson" <········@AsMympoiesis.com> writes:
> 
> 
>>Adam Warner wrote:
>>
>>
>>>[...]  I probably wish function calls
>>>were even more extensive (so that, e.g., one could know whether a keyword
>>>was supplied with a nil value or omitted [...])
>>
>>You can know this!  This works:
>>
>>   (defun foo (&key (x nil x-p)) (and x-p (list x)))
>>   (foo)  =>  NIL
>>   (foo :x nil)   =>  (NIL)
>>
>>HyperSpec section 3.4.1, "Ordinary Lambda Lists".
> 
> 
> Indeed.  Though note that this is not hard to know even otherwise.
> In effect, the built-in syntax doesn't supply a missing capability,
> it standardizes a notation for doing something that would otherwise
> need to be done idiomatically:
> 
>  (defvar *missing* (list '*missing*))
> 
>  (defun foo (&key (x *missing*))
>    (if (eq x *missing*) 'missing 'given))
> 
>  (foo)        => MISSING
>  (foo :x nil) => GIVEN

Bad example:

    (foo :x *missing*) => MISSING

Better:

  (let ((missing (gensym)))
    (defun better (&key (x missing))
      (if (eq x missing) 'missing 'given)))

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Kent M Pitman
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <sfw7k69x1jr.fsf@shell01.TheWorld.com>
Kenny Tilton <·······@nyc.rr.com> writes:

> Bad example:
> 
>     (foo :x *missing*) => MISSING
 
I don't think this is an interesting case.

People intent on shooting themselves in the foot generally can.

> Better:
> 
>   (let ((missing (gensym)))
>     (defun better (&key (x missing))
>       (if (eq x missing) 'missing 'given)))

It's true this makes it more bulletproof, but it also requires a
smarter compiler (to avoid an unnecessary closure by doing flow analysis
enough to realize that constant folding is possible).  I could also have
done
 (defun better-still (&key (x '#1=#:missing))
   (if (eq x '#1#) 'missing 'given))
and avoided that, if that had been my goal, but
 (a) the code would not have been as perspicuous
 (b) I'd have to do this trick per-function, rather than being able
     to use the same *missing* in all functions in that package,
     which with appropriate common sense is easy to do
 (c) what you say provides no additional "functionality" and arguably
     increases the complexity of doing things right by a lot, perhaps
     more than the bug you're avoiding... it sets you up for more bugs.
     ... not the least of which is that several popular text editors won't
     be able to use Meta-. to find that definition as you've written it.
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F1E5F5A.6090200@nyc.rr.com>
Kent M Pitman wrote:
> Kenny Tilton <·······@nyc.rr.com> writes:
> 
> 
>>Bad example:
>>
>>    (foo :x *missing*) => MISSING
> 
>  
> I don't think this is an interesting case.

agreed, i was just idly curious as to how one could /perfectly/ 
replicate the supplied-p mechanism.

> 
> People intent on shooting themselves in the foot generally can.
> 
> 
>>Better:
>>
>>  (let ((missing (gensym)))
>>    (defun better (&key (x missing))
>>      (if (eq x missing) 'missing 'given)))
> 
> 
> It's true this makes it more bulletproof, but it also requires a
> smarter compiler (to avoid an unnecessary closure by doing flow analysis
> enough to realize that constant folding is possible).  I could also have
> done
>  (defun better-still (&key (x '#1=#:missing))
>    (if (eq x '#1#) 'missing 'given))

cool. i do not even know what '#1=#:missing is!

> and avoided that, if that had been my goal, but
>  (a) the code would not have been as perspicuous
>  (b) I'd have to do this trick per-function, rather than being able
>      to use the same *missing* in all functions in that package,
>      which with appropriate common sense is easy to do
>  (c) what you say provides no additional "functionality" and arguably
>      increases the complexity of doing things right by a lot, perhaps
>      more than the bug you're avoiding... it sets you up for more bugs.
>      ... not the least of which is that several popular text editors won't
>      be able to use Meta-. to find that definition as you've written it.


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Kalle Olavi Niemitalo
Subject: #1=#:missing (was: A Critique of Common Lisp)
Date: 
Message-ID: <87znj5xw2o.fsf_-_@Astalo.kon.iki.fi>
Kenny Tilton <·······@nyc.rr.com> writes:

> cool. i do not even know what '#1=#:missing is!

#:missing is an uninterned symbol; #1=#:missing gives it the
number 1 so that #1# can refer to it later in the same
s-expression.

CL can print things in this syntax, too:

  (let ((*print-circle* t)
        (*print-gensym* t)
        (sym (gensym)))
    (print (list sym sym)))

prints (#1=#:G1121 #1#), and returns (#:G1121 #:G1121), which is
the same list printed without *PRINT-CIRCLE*.
From: Christophe Rhodes
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <sq8yqpab41.fsf@lambda.jcn.srcf.net>
Kenny Tilton <·······@nyc.rr.com> writes:

> Bad example:
>
>     (foo :x *missing*) => MISSING
>
> Better:
>
>   (let ((missing (gensym)))
>     (defun better (&key (x missing))
>       (if (eq x missing) 'missing 'given)))

Just as bad, actually.

* (let ((missing (gensym))) 
    (defun foo (&key (x missing)) 
      (if (eq x missing) 'missing 'given)))

FOO
* (foo :x (sb-kernel:%closure-index-ref #'foo 0))

MISSING

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F1E5FCF.20106@nyc.rr.com>
Christophe Rhodes wrote:
> Kenny Tilton <·······@nyc.rr.com> writes:
> 
> 
>>Bad example:
>>
>>    (foo :x *missing*) => MISSING
>>
>>Better:
>>
>>  (let ((missing (gensym)))
>>    (defun better (&key (x missing))
>>      (if (eq x missing) 'missing 'given)))
> 
> 
> Just as bad, actually.
> 
> * (let ((missing (gensym))) 
>     (defun foo (&key (x missing)) 
>       (if (eq x missing) 'missing 'given)))
> 
> FOO
> * (foo :x (sb-kernel:%closure-index-ref #'foo 0))
> 
> MISSING

yikes!

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Yarden Katz
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <86y8z3lyfu.fsf@underlevel.net>
·········@gmx.net (thelifter) writes:

> Please read till the end and refrain from ad hominem attacks. You
> don't need
> to be like this particular individual:
> http://groups.google.com/groups?q=g:thl309275826d&dq=&hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=874r5ox1c2.fsf%40bird.agharta.de
>
> Here is a rather old paper(1984), 
> but I'm including it here for those who might be interested(one of the
> authors
> of this paper was also a member of the Common lisp committee).
>
> http://www.dreamsongs.com/Essays.html
> http://www.dreamsongs.com/NewFiles/clcrit.pdf
[snip]

The most interesting thing about this paper was the reference to
Donald Knuth's "Fourteen Hallmarks of a Good Programming Language."

It is written that this text is unpublished.  Does anyone know if a
copy is still available in some form?
-- 
Yarden Katz <····@underlevel.net>  |  Mind the gap
From: thelifter
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <b295356a.0307151844.3333a75d@posting.google.com>
First of all Adam Warner, you were right, I should have tried to
ignite the conversation. In fact, I posted before even reading the
entire document. I found that the beginning of it already contained
some interesting points which would be worthwhile discussing. Due to
time constraints I'm only writing my point of view now, AFTER having
read the other replies so far.

I want to discuss the points raised at the beginning of the paper (you
can skip to my personal summary at the end if you are in a hurry):

"1. Intellectual Conciseness. A language should be small enough for a
programmer
to be able keep all of it in mind when programming. This can be
achieved through
regularity in the design and a disciplined approach to the
introduction of primitives
and new programming concepts. PASCAL is a good example of such a
language,
although it is perhaps too concise."

Is this a valid point? I'm in doubt. I tend to agree with what I read
here on the group: use only the subset of the language you
need/understand, use/learn more of it when you need more.
Although having a simple language certainly contributes to the ease of
learning, a 1000-page manual is somehow discouraging.

"2. Compiler Simplicity. It should be possible to write a compiler
which produces
very efficient code, and it should not be too great an undertaking to
write such a
compiler perhaps a man-year at the outside."

I think Lisp fails here. How long does it take to write a good CL
compiler/interpreter/system? How long does it take for Scheme?

My personal Summary:

1) Lisp is too large, it cannot be implemented very efficiently or
better: it can't be implemented easily.

At first this doesn't seem negative from the user's point of view. For
the user, the more powerful a language the better. The ideal
implementation should have all functionality that a user could possibly
need. The drawback is that such an implementation would never be
finished. So maybe the Scheme way is better, that is, make a small,
efficient, easy-to-implement core language and put the rest into
packages.

For example, why does a Hashtable need to be part of the language
definition? No other language I know has it; everywhere Hashtables are
provided as additional packages.

If you put most stuff into packages, then if the user doesn't need a
hashtable, he can work with a smaller language. Just add what you
need.
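
(For concreteness, the built-in interface I mean is the
MAKE-HASH-TABLE/GETHASH family:

   (defvar *h* (make-hash-table :test #'equal))
   (setf (gethash "answer" *h*) 42)
   (gethash "answer" *h*)  =>  42, T

exactly the kind of thing that could, in principle, live in a library.)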

In spite of being so large, Lisp doesn't have a standard way of doing
GUIs, so usually most commercial Lisps have their own way of doing so,
and this generates incompatibilities. Wouldn't it be easier to have
this in a package that would be implemented to run on any Lisp?

The bigger the core language the more possibilities for
incompatibilities across implementations.

The practical side shows that Scheme seems to get it right. AFAIK
there are many more Scheme implementations than CL implementations,
and it is very easy to make one if it should be needed. Once you have
a Scheme you can use all the packages that conform to the standard. So
in principle you should have the power of CL but with much more
simplicity in the core language.

For example, need a Lisp to work with Java? I know three Scheme
implementations: kawa, jscheme, and one from IBM whose name I forgot.
How many CLs that work with Java are there? AFAIK not one, and this is
not likely to change, because CL is so HUGE. It would take a LONG time
for someone to implement it. To be honest it would probably need more
than one person to do it.

I think this also shows that the BIG language core is indeed a
disadvantage not only for the implementor but also for the user. A BIG
core means fewer implementations, which means fewer programs and
packages written for it, which means fewer users/uses, which means
fewer implementations, etc...

From the point of view of the implementor, a HUGE language core
has its advantages of course: once you have written an implementation
any startup will have to work very hard to catch up. Meanwhile you can
add proprietary extensions so that your customers are unlikely to
change to another implementation.

Thanks for reading.
Comments welcome, please no ad hominem attacks.
From: Greg Menke
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <m3isq24p80.fsf@europa.pienet>
·········@gmx.net (thelifter) writes:

> I want to discuss the points raised at the beginning of the paper(you
> can skip to my personal summary at the end if you are in a hurry):
> 
> "1. Intellectual Conciseness. A language should be small enough for a
> programmer
> to be able keep all of it in mind when programming. This can be
> achieved through
> regularity in the design and a disciplined approach to the
> introduction of primitives
> and new programming concepts. PASCAL is a good example of such a
> language,
> although it is perhaps too concise."
> 
> Is this a valid point? I'm in doubt. I tend to agree with what I read
> here on the group: use only the subset of the language you
> need/understand, use/learn more of it when you need more.
> Although having a simple language certainly contributes to the ease of
> learning, a 1000-page manual is somehow discouraging.

Then don't use the manual.  Write code and learn the language instead.
If your intellectual comfort level is exceeded by having to learn more
than x functions, then by all means stop learning there and only use
them.  OTOH, I'm exceedingly glad there is so much language headroom
with CL because it also means you can find a concise way of doing so
many little tasks that otherwise must be handled manually.


> 
> "2. Compiler Simplicity. It should be possible to write a compiler
> which produces
> very efficient code, and it should not be too great an undertaking to
> write such a
> compiler perhaps a man-year at the outside."
> 
> I think Lisp fails here. How long does it take to write a good CL
> compiler/interpreter/system? How long does it take for Scheme?

How long for C?  Good compilers will <always> take a lot of time.
Even if the higher level language is conceptually easy to break down,
getting the hardware-specific stuff right is tough.  Have you ever
written one?  

Is a handheld multi-meter with a frequency counter "better" than an
oscilloscope because it's simpler, faster and easier to build?

> 
> My personal Summary:
> 
> 1) Lisp is too large, it cannot be implemented very efficiently or
> better: it can't be implemented easily.

How is it too large?  Do you have a basis for your efficiency claim?
It's true CL can't be implemented easily; that's because it's a subtly
complicated environment.  Big, complicated jobs require complicated
tools- and if you don't use them, you end up paying for it in labor.

> 
> At first this doesn't seem negative from the user's point of view. For
> the user, the more powerful a language the better. The ideal
> implementation should have all functionality that a user could possibly
> need. The drawback is that such an implementation would never be
> finished. So maybe the Scheme way is better, that is, make a small,
> efficient, easy-to-implement core language and put the rest into
> packages.
> 
> For example, why does a Hashtable need to be part of the language
> definition? No other language I know has it; everywhere Hashtables are
> provided as additional packages.
> 
> If you put most stuff into packages, then if the user doesn't need a
> hashtable, he can work with a smaller language. Just add what you
> need.

If a hashtable isn't used, it's not contributing complexity.  The code
isn't in a distributed executable when it's properly tree-shaken.  How
is Common Lisp too large?  What is too large?  How is "too large" a
problem?  You've made claims about efficiency suffering somehow,
please give data.  Microsoft Developer Studio takes up nearly an order
of magnitude more disk space than LispWorks does- does this make MSVC
nearly 10 times "worse"?


> 
> In spite of being so large, Lisp doesn't have a standard way of doing
> GUIs, so usually most commercial Lisps have their own way of doing so,
> and this generates incompatibilities. Wouldn't it be easier to have
> this in a package that would be implemented to run on any Lisp?

LispWorks has CAPI, which is cross-platform- more "standard" than what
Windows offers- and it's a heck of a lot better designed than MFC.
Though a little dated apparently, CLIM offers a compatible GUI
framework across implementations.  It's true the free Common Lisps
don't have anything like that on hand, but this is an implementation
issue, not a language issue.

> 
> The bigger the core language the more possibilities for
> incompatibilities across implementations.
> 

Not if there is a standard that defines minimum behavior.


> The practical side shows that Scheme seems to get it right. AFAIK
> there are many more Scheme implementations than CL implementations,
> and it is very easy to make one if it should be needed. Once you have
> a Scheme you can use all the packages that conform to the standard. So
> in principle you should have the power of CL but with much more
> simplicity in the core language.

AFAICT, Scheme "gets it right" by only solving the simple problems.
You have yet to say why CL is "complex".  There's a bazillion
functions, but that doesn't make it complex- you only need to learn
what you're comfortable with- the same, simple syntax rules apply from
top to bottom.
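
The uniformity is literal; a function call, a macro, and a special form
all read the same way (F, G and X here are just stand-ins):

  (sqrt 2)                ; function call
  (when (> x 0) (f x))    ; macro
  (let ((x 1)) (g x))     ; special form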

> 
> For example, need a Lisp to work with Java? I know three Scheme
> implementations: kawa, jscheme, and one from IBM whose name I forgot.
> How many CLs that work with Java are there? AFAIK not one, and this is
> not likely to change, because CL is so HUGE. It would take a LONG time
> for someone to implement it. To be honest it would probably need more
> than one person to do it.

Please define "work with Java".

> 
> I think this also shows that the BIG language core is indeed a
> disadvantage not only for the implementor but also for the user. A BIG
> core means fewer implementations, which means fewer programs and
> packages written for it, which means fewer users/uses, which means
> fewer implementations, etc...

You've made a series of claims, but it's nothing more than personal
opinion at this point.  How much Common Lisp code have you written?


> From the point of view of the implementor, a HUGE language core
> has its advantages of course: once you have written an implementation
> any startup will have to work very hard to catch up. Meanwhile you can
> add proprietary extensions so that your customers are unlikely to
> change to another implementation.

It's a problem for a user only when the extensions are taken
advantage of.  Windows users have the same problem when they go beyond
ANSI C.  It's not really a problem either, because the CL
implementations are differentiated by their extensions- they're one of
the reasons why a particular implementation is chosen.  If
portability is of foremost concern, then getting stuck to an
implementation because extensions are used is your own fault.

Gregm
From: Matthias
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <bf3p1r$6hk$1@trumpet.uni-mannheim.de>
Greg Menke wrote:

> Then don't use the manual.  Write code and learn the language instead.
> If your intellectual comfort level is exceeded by having to learn more
> than x functions, then by all means stop learning there and only use
> them.  OTOH, I'm exceedingly glad there is so much language headroom
> with CL because it also means you can find a concise way of doing so
> many little tasks that otherwise must be handled manually.

"You only need to learn parts of X, you can ignore the stuff not relevant to 
you." is an argument often heard for defending many programming languages 
(not just CL).  I think its wrong:  Unless you are working alone and never 
look at other peoples' code you are forced to learn more or less the whole 
language rather quickly (because other people will use subsets different 
from yours).  

The exception being when some obscure features are rarely used in a 
community at all (C++ template metaprogramming features once belonged to 
this class, but this is changing), or when defined subsets of a language 
exist (some sort of "core", extended by some sort of "libraries") and user 
communities agree on using some of these subsets only.
From: Greg Menke
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <m3y8yylazg.fsf@europa.pienet>
Matthias <····@yourself.pl> writes:

> Greg Menke wrote:
> 
> > Then don't use the manual.  Write code and learn the language instead.
> > If your intellectual comfort level is exceeded by having to learn more
> > than x functions, then by all means stop learning there and only use
> > them.  OTOH, I'm exceedingly glad there is so much language headroom
> > with CL because it also means you can find a concise way of doing so
> > many little tasks that otherwise must be handled manually.
> 
> "You only need to learn parts of X, you can ignore the stuff not relevant to 
> you." is an argument often heard for defending many programming languages 
> (not just CL).  I think its wrong:  Unless you are working alone and never 
> look at other peoples' code you are forced to learn more or less the whole 
> language rather quickly (because other people will use subsets different 
> from yours).  
> 
> The exception being when some obscure features are rarely used in a 
> community at all (C++ template metaprogramming features once belonged to 
> this class, but this is changing), or when defined subsets of a language 
> exist (some sort of "core", extended by some sort of "libraries") and user 
> communities agree on using some of these subsets only.

I'm not suggesting that people limit themselves; I think it's important
to keep working at the upper limits of understanding, and to me that
means regularly downloading and digesting additional details about
Common Lisp.  My argument is with the assertion that the # of
functions = complexity- I think it's oversimplistic and ignores other
vitally important aspects of Common Lisp itself.  On the other hand,
if the OP feels there are "too many functions", then of course he's
entirely free to limit himself- but it's hardly an objective criticism.

Gregm
From: Will Hartung
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <bf4pu0$avlst$1@ID-197644.news.uni-berlin.de>
I have to laugh when people say "Lisp is too big" and then complain that it
lacks a GUI. Make up your mind.

Let's look at another small language....Java.

Java is a small language that happens to be ENORMOUS today.

Knowing "Java" today means not knowing the few keywords and syntax elements
of Java, it means understanding the syntax, the run time environment, the
build environment, the "standard class libraries" and, minimally, one of
several sub categories (web apps, EJB, Swing GUIs, micro Java, etc).

How "small" is Java when you're using XDoclet on top of AOP on top of core
Java? Gets pretty nasty doesn't it?

Java now has FOUR different GUIs of assorted age and popularity: AWT, Swing,
IBM's (whatever Eclipse uses -- SWT?), and the mini GUI for Micro Java. And
everyone who uses some Tool to build their GUI has a different way of
approaching how the GUI is built and implemented.


On the other side of the coin, we have this other nice small AND "STANDARD"
.Net system from Microsoft. A generic, algol-esque Java 2.0 language, where
they no doubt only spent a man-year on the compiler.

This system was submitted to ECMA and now, I think ISO, for standardization.

And what was the first thing folks IMMEDIATELY complained about?? Where's
the GUI code, the SQL code, the ASP code? How come THEY'RE not in the
standard??

"Why isn't this bigger" they cried. "We have work to do, we can't use
this..."

Scheme may be popular, but there certainly aren't a lot (that I know of) of
large complicated code bases or systems that are actually source-code
portable across Scheme implementations. For such a small language, an awful
lot of people seem to do an awful lot of stuff just a wee bit differently.

CL has ported large code sets for a long time. Folks write HUGE systems in
CL. Huge systems are hard to manage and maintain.

But the larger the language, the smaller the system, so imagine if a system
is huge in CL, what a vast monster it must be in something else.

Also, as an aside, if you haven't looked at it, take a glance at the Oracle
RDBMS someday and what a VAST array of services it supplies. Folks who use
those services don't seem to complain that Oracle is too big, or that it was
difficult to implement. Just a data point. Folks who don't use these
services certainly shout off, but not those that do...

Regards,

Will Hartung
(·····@msoft.com)
From: Kent M Pitman
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <sfwn0faa5gw.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

"Will Hartung" <·····@msoft.com> writes:

> I have to laugh when people say "Lisp is too big" and then complain that it
> lacks a GUI. Make up your mind.

There is some additional defense of the obvious questions like this at
 http://www.nhplace.com/kent/PS/dpANS.html
From: Don Geddis
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <87k7aiibij.fsf@sidious.geddis.org>
·········@gmx.net (thelifter) writes:
> "1. Intellectual Conciseness. A language should be small enough for a
> programmer to be able keep all of it in mind when programming.

Humans are capable of remembering a huge amount of information.  Natural
languages are much more complex than any artificial language.  Yet native
speakers seem to do just fine.

This seems like a silly (top-level) goal.

> This can be achieved through regularity in the design and a disciplined
> approach to the introduction of primitives and new programming concepts.

This part makes sense, though.  If you're going to have a certain level of
complexity, it definitely is easier on everyone if the design is regular
and modularized.

But that's a very different goal from simply being "small".  Instead, the goal
is "as simple as possible, but without sacrificing power".  Or: "no
gratuitous complexity."

> Although having a simple language certainly contributes to the ease of
> learning. A 1000 page manual somehow is discouraging.

Common Lisp is designed to be highly efficient for expert programmers to use.
Ease of learning is secondary.  (Even that can be dealt with by a properly
written tutorial.  Nobody is stopping an instructor from teaching a simple
subset of CL.)

If ease of learning is your primary goal, why don't you choose a language
(like Scheme) that was designed with that in mind?

> "2. Compiler Simplicity. It should be possible to write a compiler which
> produces very efficient code

Sounds reasonable.  Fortunately, Common Lisp has proven to be amenable
to many good compilers which are capable of producing "very efficient code".

> and it should not be too great an undertaking to write such a compiler
> perhaps a man-year at the outside."

This, on the other hand, sounds like a silly goal.  What is the point in
designing a language for ease of implementing compilers?  Common Lisp had
much better goals: empower the end programmer.  If it takes more effort for
the handful of implementors to do something, but thousands of programmers are
thus empowered, that sure sounds like a good tradeoff.

> I think Lisp fails here. How long does it take to write a good CL
> compiler/interpreter/system? How long does it take for Scheme?

Who cares?  Why is this goal worthwhile?

> 1) Lisp is too large, it cannot be implemented very efficiently

You are completely in error on this point.

For example, take a look at CMUCL, which began as a CMU research project
to explore this very issue.  They produced a compiler which proved conclusively
that CL could be implemented efficiently.

> or better: it can't be implemented easily.

This goal is often in conflict with much more important goals (such as the
power of the language from a programmer perspective).  Nothing comes for free.
If you want ease of implementation, choose Scheme.  If you're a programmer
that needs to get a job done, choose Common Lisp.

> The ideal implementation
> should have all functionality that a user could possibly need. The
> drawback is that such an implementation would never be finished.

Fortunately, the Common Lisp spec didn't cross this threshold.  Plenty
of CL implementations in fact did get finished.

> For example, why does a Hashtable need to be part of the language
> definition? No other language I know has it; everywhere Hashtables are
> provided as additional packages.

The issue is, can a portable program rely on the functionality being present
in the language?  If yes, then every implementation must provide it somehow.
"Packages" (in a CL context, perhaps "modules" would be a better word) seems
like an implementation detail.
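
For instance (a quick illustration): because the spec nails down the
hash-table API, this runs unchanged on every conforming implementation:

    (let ((h (make-hash-table :test #'equal)))
      (setf (gethash "greeting" h) "hello")
      (gethash "greeting" h))     ; => "hello", T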

> If you put most stuff into packages, if the user doesn't need a
> hashtable, he can work with a smaller language. Just add what you need.

But what if he does need a hashtable?  Must the implementation provide it?
Did the language spec nail down exactly how it must work?  If so, how is this
really different in practice from the large CL spec?

> In spite of being so large, Lisp doesn't have a standard way of doing
> GUIs, so usually most commercial Lisps have their own way of doing so,
> and this generates incompatibilities. Wouldn't it be easier to have
> this in a package, that would be implemented to run on any Lisp?

You seem to be confusing specification with implementation.  If you want it
the same across all implementations, then it needs to be in the spec.  Don't
tell me you're now suddenly arguing that the CL spec is too small, and needs
to be expanded to cover GUIs also!

> The bigger the core language the more possibilities for
> incompatibilities across implementations.

Do you have any evidence that this theoretical danger is true in practice?

> The practical side shows that Scheme seems to get it right. AFAIK
> there are many more Scheme implementations than CL implementations,
> and it is very easy to make one if it should be needed.

True, but so what?  Are there not enough CL implementations?  What user
benefit arises because of the large volume of different Scheme implementations?

> Once you have a Scheme you can use all the packages that conform to the
> standard. So in principle you should have the power of CL but with much
> more simplicity in the core language.

You seem to be suggesting some kind of small, core CL, with a large set of
public domain (?) modules written in this core, and available to all
implementations.  The API of both the core and the modules would need to be
fixed by the spec, and someone would have to write and continue to provide
the module code so that it conforms to the spec.

That's not a bad idea.  Common Lisp didn't happen to work out that way,
although you can see pieces of the idea with the LOOP macro, PCL (CLOS),
and perhaps DEFSYSTEM.  The ones that made it into the ANSI spec (the first
two) generally got incorporated directly into future implementations.
With the ones that weren't specified in the spec, multiple incompatible
implementations are available (like DEFSYSTEM/ASDF).

But the spec is the key.  Scheme only specifies a tiny language, so it gets
extended in incompatible ways for the same functionality.  Hence a programmer
can't rely on much in a conforming program.

> How many CLs that work with Java are there? AFAIK not one, and this is
> not likely to change, because CL is so HUGE.

Work how?  Most of the major CL implementations have hooks into foreign
function calls or other OS connections.  What exactly do you want to have
happen between CL and Java?  It's likely that the current implementations
already do it.

> I think this also shows that the BIG language core is indeed a
> disadvantage not only for the implementor but also for the user. BIG
> core means fewer implementations

True.

> which means fewer programs, packages written for it

False.  There's only one implementation of Perl.  That doesn't seem to have
limited the number of programs or packages written for it.

The number of implementations is far, far down on the list of why people
choose to program in a particular language.

> Viewing from the point of view of the implementor a HUGE language core
> has its advantages of course: once you have written an implementation
> any startup will have to work very hard to catch up. Meanwhile you can
> add proprietary extensions so that your customers are unlikely to
> change to another implementation.

The ANSI CL spec is pretty clear.  As long as the implementation conforms,
customers can certainly choose whether to use proprietary extensions or not.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
Higher beings from outer space may not want to tell us the secrets of life,
because we're not ready.  But maybe they'll change their tune after a little
torture.  -- Deep Thoughts, by Jack Handey
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <JAiRa.104711$Io.8967229@newsread2.prod.itd.earthlink.net>
Don Geddis wrote:
> ·········@gmx.net (thelifter) writes:
> > "1. Intellectual Conciseness. A language should be small enough for a
> > programmer to be able keep all of it in mind when programming.
>
> Humans are capable of remembering a huge amount of information.  Natural
> languages are much more complex than any artificial language.  Yet native
> speakers seem to do just fine.
>
> This seems like a silly (top-level) goal.

Tangential to your main points, with which I largely agree, this natural
language analogy doesn't make the case that small programming languages
aren't desirable.  Speakers of natural languages don't have to conform to
rules as strictly as authors of code, because the audience for natural
language is capable of performing an enormous amount of error correction,
inference, etc. etc.

If one is forced to speak incredibly precisely, where the slightest
ambiguity or deviation from an exacting set of rules is an error, then being
able to "keep all of it in mind" could be a big benefit, in terms of
minimizing errors and maximizing programmer efficiency.

But "keeping all of [a language] in mind" presumably applies to the language
syntax and semantics, not its entire set of libraries.  If it were the
latter, no useful modern language would qualify.

Anton
From: Barry Margolin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <gIiRa.230$0z4.92@news.level3.com>
In article <··············@sidious.geddis.org>,
Don Geddis  <···@geddis.org> wrote:
>·········@gmx.net (thelifter) writes:
>> "1. Intellectual Conciseness. A language should be small enough for a
>> programmer to be able keep all of it in mind when programming.
>
>Humans are capable of remembering a huge amount of information.  Natural
>languages are much more complex than any artificial language.  Yet native
>speakers seem to do just fine.

But natural language doesn't require as much precision as programming
languages do. U can b very sloppyin you're use of natural language and get
away with it -- the reader/listener can figure out what you're saying.

Since computers are much less forgiving, you have to remember lots more
details.  Human memory is good for concepts, but not so great at
remembering lots of random details.  If you can arrange for most things to
follow consistent patterns, we can learn those patterns readily.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F15BF1A.6000907@nyc.rr.com>
Barry Margolin wrote:
> In article <··············@sidious.geddis.org>,
> Don Geddis  <···@geddis.org> wrote:
> 
>>·········@gmx.net (thelifter) writes:
>>
>>>"1. Intellectual Conciseness. A language should be small enough for a
>>>programmer to be able keep all of it in mind when programming.
>>
>>Humans are capable of remembering a huge amount of information.  Natural
>>languages are much more complex than any artificial language.  Yet native
>>speakers seem to do just fine.
> 
> 
> But natural language doesn't require as much precision as programming
> languages do. U can b very sloppyin you're use of natural language and get
> away with it -- the reader/listener can figure out what you're saying.
> 
> Since computers are much less forgiving, you have to remember lots more
> details.  Human memory is good for concepts, but not so great at
> remembering lots of random details.  

c'mon! what is the size of your NL vocabulary compared to CL? besides, 
natural language does not offer word completion or heads-up display of 
function args after I have typed the function name, or apropos output, 
or the hyperspec or other on-line doc.

Is someone seriously suggesting a person using CL regularly ends up 
reinventing function X because they /forgot it was there/????

besides, the premise is silly. OK, I have my nice little CL Lite. Now I 
have to go out and pull off the shelf a hash library and an OO library 
and whatever else Gabriel wanted left out. Now miraculously they are all 
easier for me to remember? And now it is easier for someone else to read 
my code, because they invariably in the past had pulled the same add-ons 
off the shelf???? chya.

what exactly is the point of this thread? That CL should be abandoned 
because it is too hard to implement? What do we do with all the CL 
implementations? Can I have them?

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <NspRa.105484$Io.9024583@newsread2.prod.itd.earthlink.net>
Kenny Tilton wrote:
> Barry Margolin wrote:
> > In article <··············@sidious.geddis.org>,
> > Don Geddis  <···@geddis.org> wrote:
> >
> >>·········@gmx.net (thelifter) writes:
> >>
> >>>"1. Intellectual Conciseness. A language should be small enough for a
> >>>programmer to be able keep all of it in mind when programming.
> >>
> >>Humans are capable of remembering a huge amount of information.  Natural
> >>languages are much more complex than any artificial language.  Yet native
> >>speakers seem to do just fine.
> >
> >
> > But natural language doesn't require as much precision as programming
> > languages do. U can b very sloppyin you're use of natural language and get
> > away with it -- the reader/listener can figure out what you're saying.
> >
> > Since computers are much less forgiving, you have to remember lots more
> > details.  Human memory is good for concepts, but not so great at
> > remembering lots of random details.
>
> c'mon! what is the size of your NL vocabulary compared to CL? besides,
> natural language does not offer word completion or heads-up display of
> function args after I have typed the function name, or apropos output,
> or the hyperspec or other on-line doc.
>
> Is someone seriously suggesting a person using CL regularly ends up
> reinventing function X because they /forgot it was there/????

Notice that the message you responded to didn't even mention CL (except in
the inherited subject line).  I responded to the same point that Barry did,
which is that the claim "Natural languages are much more complex than any
artificial language. Yet native speakers seem to do just fine" isn't a good
analogy for the programming language case, for the reasons Barry and I gave.

If you put the original quote in context: "A language should be small enough
for a programmer to be able keep all of it in mind when programming.  This
can be achieved through regularity in the design and a disciplined approach
to the introduction of primitives and new programming concepts", I think it
has a valid point.  The second sentence seems to indicate that the first
sentence doesn't mean that you have to literally be able to memorize every
feature of the language and every library function.  Rather, a consistent
syntax and semantics, combined with well-organized libraries, can make a
language easier to manage.  That applies to most Lisps, to a greater degree
than many other languages.  I've worked with more than one language (not
Lisps) that *doesn't* qualify in these respects.

I interpret "keep all of it in mind" to mean something like "allows you to
create useful, concise mental models."  Without a regular design, you have
to memorize lots of ad-hoc details, and there are no unifying principles to
help you do that.

Even the online docs and help don't eliminate this issue.  If you're
constantly forced to refer to the docs because you can't remember something,
that's a productivity killer.  The goal is to be able to fill your
brain - your level 1 cache - with info that'll give you the most leverage.
The info that best qualifies is not a collection of ad-hoc details, but a
set of organizing principles from which you can infer and predict how things
should work, or where you can find them, even if you need to look up some
details.

> besides, the premise is silly. OK, I have my nice little CL Lite. Now I
> have to go out and pull off the shelf a hash library and an OO library
> and whatever else Gabriel wanted left out. Now miraculously they are all
> easier for me to remember? And now it is easier for someone else to read
> my code, because they invariably in the past had pulled the same add-ons
> off the shelf???? chya.

The same point applies to libraries: a good library design should allow you
to create a mental model which helps you find things in the library, so what
we're discussing is one of the ways to address the library reuse problem.

> what exactly is the point of this thread? That CL should be abandoned
> because it is too hard to implement?

*My* point was that weak analogies can't be used to defend good languages.

> What do we do with all the CL implementations? Can I have them?

Fine by me, but you're going to need a bigger apartment!

Anton
From: David Steuber
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <87el0p1z6j.fsf@verizon.net>
Kenny Tilton <·······@nyc.rr.com> writes:

> what exactly is the point of this thread? That CL should be abandoned
> because it is too hard to implement? What do we do with all the CL
> implementations? Can I have them?

If you don't tell anyone, I'll give you a copy of CMUCL.

I see there have been some accounting tricks pulled in this thread.
Mainly, where does the complexity go?  You can do it once for the
compiler, or over and over again for each program that needs it.

The C programming language is very small.  It has a tiny vocabulary.
Now throw in all the libraries that make it useful.  That is a lot
of stuff.  Oh, and there is no standard GUI!  Nuts!

What is the standard GUI for Scheme, btw?  I don't recall one
mentioned in The Little Schemer.  I suppose we can be thankful that
Java has a standard GUI.

Also, Perl has native hashtables.  It also has multiple
implementations.  At least two anyway.  That doesn't count simple
porting.  If you count the ports, you will see why the C code base is
so fricken complex.

Meanwhile, in CL, it only takes a few hundred lines to do an embedded
Prolog implementation.

-- 
One Editor to rule them all.  One Editor to find them,
One Editor to bring them all and in the darkness bind them.

(do ((a 1 b) (b 1 (+ a b))) (nil a) (print a))
From: Prof Ric Crabbe
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <wzcd6g92hxc.fsf@crab.cs.usna.edu>
[comp.lang.scheme deleted from Newsgroups: because, why should they care?]

Kenny Tilton <·······@nyc.rr.com> writes:
> 
> Is someone seriously suggesting a person using CL regularly ends up
> reinventing function X because they /forgot it was there/????
> 

I pretty much agree with the point of your article, but to this one
question I must reply, "Yes, all the time."  Just this Monday I
re-wrote reduce because I forgot it was there.  Only when I looked up
map in CLtL did I notice reduce and thwack my forehead.  Perhaps it's
because I'm an idiot, but this does happen to me.

A related issue to the size of CL is that there are all these functions in
there I *still* don't know.  I had been programming in CL daily for 5
years before I even discovered reduce.  How many people out there know
notany?  Sometimes when I'm bored, I flip through CLtL scanning for a
new function I didn't know about.  Just now I came across
set-exclusive-or.  Not sure I've ever seen that before.
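
For anyone else flipping through, here is roughly what those two do
(return values per the spec, except that SET-EXCLUSIVE-OR's element
order is unspecified):

    (notany #'evenp '(1 3 5))              ; => T, since no element is even
    (set-exclusive-or '(1 2 3) '(2 3 4))   ; => (1 4), in some order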

Of course, none of this is an argument against a big language, really,
except if you hate doubt.  If I'd used C, I'd *know* it wasn't
there.


fwiw,
ric

Disclaimer: I may in fact, be an idiot.
From: Don Geddis
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <871xwostkv.fsf@sidious.geddis.org>
> Kenny Tilton <·······@nyc.rr.com> writes:
> > Is someone seriously suggesting a person using CL regularly ends up
> > reinventing function X because they /forgot it was there/????

Prof Ric Crabbe <······@usna.edu> writes:
> Just this Monday I re-wrote reduce because I forgot it was there.

I sometimes have the reverse experience.  I'm about to implement some simple,
very general piece of code, and I think to myself, "surely other people
need this too...what are the chances that the CL designers already specified
it?"  And I spend a short time searching for a built-in function to accomplish
my need.

More often than not, it's already there somewhere.  Sometimes not: I've wanted
something like SPLIT-SEQUENCE:
        http://www.cliki.net/SPLIT-SEQUENCE
which could easily have been in ANSI CL, but happened not to be.  Or regular
expressions.
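
For reference, the kind of call that library provides (usage sketched from
its cliki page; it also returns an index as a second value):

    (split-sequence #\Space "once upon a time")
    ;; => ("once" "upon" "a" "time"), 16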

But other times, I design my own interface in my head, am just about to
implement it, and suddenly discover that the CL designers already
anticipated my need and provided the function built in.  Often with a
better or more general design than I was originally thinking.  I came
across #'IDENTITY this way.  Probably saw it when I first learned the
language, and thought "who would ever want this?" and promptly forgot it.
Years later, I was in a situation where I suddenly needed it (required to
pass a function as an argument, but I didn't want it to do anything), and
found it amusing to see the thing I needed right there, waiting for me all
this time.
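
A sketch of that situation:

    ;; A function argument is required, but no transformation beyond
    ;; "is it true?" is wanted -- IDENTITY fills the slot:
    (count-if #'identity '(a nil b nil c))   ; => 3, the non-NIL elements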

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
One way to make more money at your job is just to make everything an "extra."
For instance, showing up at work, that's extra; actually doing work, that's
extra; wearing clothes, that's extra.  See how it works?
	-- Deep Thoughts, by Jack Handey [1999]
From: Christopher C. Stacy
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <u65m181em.fsf@dtpq.com>
>>>>> On 17 Jul 2003 13:15:43 -0400, Prof Ric Crabbe ("Prof") writes:

 Prof> [comp.lang.scheme deleted from Newsgroups: because, why should they care?]

 Prof> Kenny Tilton <·······@nyc.rr.com> writes:
 >> 
 >> Is someone seriously suggesting a person using CL regularly ends up
 >> reinventing function X because they /forgot it was there/????
 >> 

 Prof> I pretty much agree with the point of your article, but to this one
 Prof> question I must reply, "Yes, all the time."  Just this Monday I
 Prof> re-wrote reduce because I forgot it was there.  Only when I looked up
 Prof> map in CLtL did I notice reduce and thwack my forehead.  Perhaps it's
 Prof> because I'm an idiot, but this does happen to me.

 Prof> A related issue to the size of CL is that there are all these functions in
 Prof> there I *still* don't know.  I had been programming in CL daily for 5
 Prof> years before I even discovered reduce.  How many people out there know
 Prof> notany?  Sometimes when I'm bored, I flip through CLtL scanning for a
 Prof> new function I didn't know about.  Just now I came across
 Prof> set-exclusive-or.  Not sure I've ever seen that before.

 Prof> Of course, none of this is an argument against a big language, really,
 Prof> except if you hate doubt.  If I'd used C, I'd *know* it wasn't
 Prof> there.

What this illustrates is that you can program very effectively in
Common Lisp without knowing the entire language.
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <adbdhvb8.fsf@ccs.neu.edu>
Kenny Tilton <·······@nyc.rr.com> writes:

> Is someone seriously suggesting a person using CL regularly ends up
> reinventing function X because they /forgot it was there/????

Go through someone else's CL code and look at how many places they
re-invented SOME, EVERY, NOTANY, NOTEVERY, COUNT-IF, CONSTANTLY,
ETYPECASE, REDUCE, FIND, POSITION, MISMATCH, and CHECK-TYPE.

It happens all the time.
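
A typical sighting, sketched from memory rather than from any particular
code base:

    ;; The hand-rolled version you find in the wild...
    (defun sum-list (list)
      (let ((total 0))
        (dolist (x list total)
          (incf total x))))

    ;; ...and the built-in it re-invents:
    (reduce #'+ '(1 2 3 4))   ; => 10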
From: Barry Margolin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <y2CRa.250$0z4.186@news.level3.com>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>Kenny Tilton <·······@nyc.rr.com> writes:
>
>> Is someone seriously suggesting a person using CL regularly ends up
>> reinventing function X because they /forgot it was there/????
>
>Go through someone else's CL code and look at how many places they
>re-invented SOME, EVERY, NOTANY, NOTEVERY, COUNT-IF, CONSTANTLY,
>ETYPECASE, REDUCE, FIND, POSITION, MISMATCH, and CHECK-TYPE.
>
>It happens all the time.

Which is better: a language that has so many advanced features that you
don't remember them all and occasionally reimplement some, or a language
that has so few advanced features that you *have* to implement them
yourself?

I'm reminded of when I was fooling around with Prolog many years ago.  I
noticed that just about every program posted to comp.lang.prolog seemed to
start with a handful of definitions of simple predicates like "member".
Each one is only 2 or 3 lines, so it's not a big deal, but it seemed
wasteful that every programmer went through this same silly process.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <smp5gfm7.fsf@ccs.neu.edu>
Barry Margolin <··············@level3.com> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
> >Kenny Tilton <·······@nyc.rr.com> writes:
> >
> >> Is someone seriously suggesting a person using CL regularly ends up
> >> reinventing function X because they /forgot it was there/????
> >
> >Go through someone else's CL code and look at how many places they
> >re-invented SOME, EVERY, NOTANY, NOTEVERY, COUNT-IF, CONSTANTLY,
> >ETYPECASE, REDUCE, FIND, POSITION, MISMATCH, and CHECK-TYPE.
> >
> >It happens all the time.
> 
> Which is better: a language that has so many advanced features that you
> don't remember them all and occasionally reimplement some, or a language
> that has so few advanced features that you *have* to implement them
> yourself?

I'd like to see an agent of some sort that said ``hey, why didn't you
just use REDUCE here?'' so I can be even lazier than I am now.
From: Barry Margolin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <xfCRa.253$0z4.131@news.level3.com>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>I'd like to see an agent of some sort that said ``hey, why didn't you
>just use REDUCE here?'' so I can be even lazier than I am now.

Hey Kent, weren't you involved in an AI-based programming assistant when we
were at MIT?  Did it do this kind of thing?

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Kent M Pitman
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <sfwbrvqa4o1.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Barry Margolin <··············@level3.com> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
> >I'd like to see an agent of some sort that said ``hey, why didn't you
> >just use REDUCE here?'' so I can be even lazier than I am now.
> 
> Hey Kent, weren't you involved in an AI-based programming assistant when we
> were at MIT?  Did it do this kind of thing?

The Programmer's Apprentice was a large project in the area of automatic 
program creation and modification.  Synthesizing new programs is much harder
than recognizing existing programs.  Building on a Master's thesis (I think
it was) by Dick Waters and continuing onward from there, the foundation was
laid for recognizing such idioms; however, as I recall, the limiting problem
was in the realm of graph matching, which turns out to be the time-limiting 
issue in working backwards from program to plan.  My recollection is that
if you're not careful it's very, very bad complexity.
I don't think you'll get commercial products any time soon.
From: Matthias
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <bf8a3f$f93$1@trumpet.uni-mannheim.de>
Barry Margolin wrote:
> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu>
> wrote:
>>Go through someone else's CL code and look at how many places they
>>re-invented SOME, EVERY, NOTANY, NOTEVERY, COUNT-IF, CONSTANTLY,
>>ETYPECASE, REDUCE, FIND, POSITION, MISMATCH, and CHECK-TYPE.
>>
>>It happens all the time.
> 
> Which is better: a language that has so many advanced features that you
> don't remember them all and occasionally reimplement some, or a language
> that has so few advanced features that you *have* to implement them
> yourself?

Of course the first language is the better of the two.  But _best_ would be 
a language with advanced features which actually are organized in a 
consistent way, thus easier to remember.

But Common Lisp is a common lisp and was designed to be easy to use for 
people who already knew some other dialect.  Backwards-compatibility was 
more important than consistency or ease of use.  At the time CL was 
designed this probably was the right decision to make. (Although sometimes I 
wonder how difficult it might have been to define a beautiful and more 
consistent lisp and automatically translate old source code to it.)
From: Paolo Amoroso
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <Zv8XP8lkaSfO0BurzjZpbYavi8Ol@4ax.com>
On Fri, 18 Jul 2003 10:09:49 +0200, Matthias <····@yourself.pl> wrote:

> designed this probably was the right decision to make. (Although sometimes I 
> wonder how difficult it might have been to define a beautiful and more 
> consistent lisp and automatically translate old source code to it.)

ISLISP might be such a design:

  http://islisp.info


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Andrew Philpot
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <slrnbhgeut.274.philpot@blombos.isi.edu>
In article <············@trumpet.uni-mannheim.de>, Matthias wrote:
> (Although sometimes I 
> wonder how difficult it might have been to define a beautiful and more 
> consistent lisp and automatically translate old source code to it.)

Kimball Collins and I toyed with this idea awhile back.  I still
tinker with the code base (working title: "Ortholisp") from time to
time.  The scope of the effort is illustrated by our rule of thumb,
which was basically to allow any "clean-up" that could be accomplished
using macroexpansion and/or compiler macroexpansion, but not to
require full-on code walking.  So you have things like MAKUNBOUND ->
MAKE-UNBOUND, NCONC -> NAPPEND, some CLOSification of simple
operations that seem very generic; some attempt to unify LET etc. and
MULTIPLE-VALUE-BIND, but nothing very deep.
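
The flavor of such a clean-up, as a hypothetical one-liner (NAPPEND being
our working name, not standard CL):

    (defmacro nappend (&rest lists)
      "Destructive append, renamed to parallel APPEND."
      `(nconc ,@lists))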

Don't get me wrong - I'm not sure I would ever actually program in
Ortholisp per se, but isn't it nice to have a language you can
feasibly perform an Orwellian transform on?

Andrew
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F16F604.8080407@nyc.rr.com>
Joe Marshall wrote:
> Kenny Tilton <·······@nyc.rr.com> writes:
> 
> 
>>Is someone seriously suggesting a person using CL regularly ends up
>>reinventing function X because they /forgot it was there/????
> 
> 
> Go through someone else's CL code and look at how many places they
> re-invented SOME, EVERY, NOTANY, NOTEVERY, COUNT-IF, CONSTANTLY,
> ETYPECASE, REDUCE, FIND, POSITION, MISMATCH, and CHECK-TYPE.
> 
> It happens all the time.
> 

Ah, but is that because they forgot (as I marveled) or because they 
never knew?

REDUCE, btw, I could see someone forgetting. Maybe because it does not 
come up that often. Hell, I sometimes forget my own stuff if it gets 
used too rarely. I can always look at my little utilities files and 
discover functions I thought were good ideas but then never got used 
enough to take root.

as for reinventing stuff we never knew of, yup. in the early days, 
coming from C and Basic, I simply did not expect so much built-in 
functionality. by mid-CL-life i had promised myself to 
lookup-before-reinventing, but no good: CL is so damn easy my fingers 
would toss off SOME before my Good Intentions could get me to turn from 
the task at hand to a reference. and now I know I know every CL function 
(not!) so why look?

btw, the Saddamite center has acknowledged the referee's kickoff 
instruction and charged straight for the ball, but stalled before 
kicking off. TeamKenny (ya gotta know yer SouthPark movie) waits 
patiently while tech support ponders the kickable-margin parameter.

RoboCup is a hoot. I plan to share a base client with anyone that wants 
it. Fair warning: surprise, surprise, it uses Cells.

Hmm, I wonder what a Scheme version of Cells would be like... oh, you 
guys don't have an object system, right? I'd have to implement Cells for 
all the popular ones?

:)

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <MMDRa.106435$Io.9111041@newsread2.prod.itd.earthlink.net>
> RoboCup is a hoot. I plan to share a base client with anyone that wants
> it. Fair warning: surprise, surprise, it uses Cells.
>
> Hmm, I wonder what a Scheme version of Cells would be like... oh, you
> guys don't have an object system, right? I'd have to implement Cells for
> all the popular ones?
>
> :)

Nah, you would use a functional implementation of cells, the way you would
have in the first place, if you hadn't been sucked in by all that excess
functionality in CL...  ;)

Cells *should* be "purely functional slots" (to borrow your words :), which
contain promises to automatically cache the results of calculations, and are
evaluated lazily as needed by other calculations, so the dependencies take
care of themselves.
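
In CL terms, a minimal sketch of that lazy idea (hypothetical names;
Scheme's DELAY/FORCE is the native way to say it):

    (defun make-lazy-cell (thunk)
      "A promise: run THUNK on first demand, cache the result."
      (let ((computed nil) (value nil))
        (lambda ()
          (unless computed
            (setf value (funcall thunk)
                  computed t))
          value)))

    ;; (defvar *b* (make-lazy-cell (lambda () (expensive-calculation))))
    ;; (funcall *b*)   ; computed here, cached for every later caller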

Anton
From: Kenny Tilton
Subject: A pox on laziness! [was Re: A Critique of Common Lisp]
Date: 
Message-ID: <3F172768.9060504@nyc.rr.com>
Anton van Straaten wrote:
>>RoboCup is a hoot. I plan to share a base client with anyone that wants
>>it. Fair warning: surprise, surprise, it uses Cells.
>>
>>Hmm, I wonder what a Scheme version of Cells would be like... oh, you
>>guys don't have an object system, right? I'd have to implement Cells for
>>all the popular ones?
>>
>>:)
> 
> 
> Nah, you would use a functional implementation of cells, the way you would
> have in the first place, if you hadn't been sucked in by all that excess
> functionality in CL...  ;)
> 
> Cells *should* be "purely functional slots" (to borrow your words :), 

of an instance? of a class? OO?

 >... which
> contain promises to automatically cache the results of calculations, and are
> evaluated lazily as needed by other calculations, so the dependencies take
> care of themselves.

cacheing + lazy = needless re-calculation.

a depends on b

b depends on c

c changes in such a way that b will recalculate the same result, so a is 
still valid.

but we are lazy, so we just mark a as well as b obsolete, forcing a 
needless recalculation of a.

oh, i get it, lazy refers to the programmer who could not work out how 
to track dependencies, not the cpu getting a break. :)

besides, without eager evaluation you do not have dataflow as your 
paradigm. we need to get the programmer's hands off the controls at 
runtime, and produce instead an efficient, declarative paradigm like 
dataflow.

dataflow lets us build not just models but /working models/. eager 
evaluation is just a negative way of characterizing cause and effect. 
cells introduce causation as a reliable friend of the programmer, who 
now just declares the causal dependencies, links the model to an event 
stream, then sits back in astonishment that it all Just Works.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Anton van Straaten
Subject: Re: A pox on laziness! [was Re: A Critique of Common Lisp]
Date: 
Message-ID: <m7MRa.107385$Io.9175957@newsread2.prod.itd.earthlink.net>
> > Cells *should* be "purely functional slots" (to borrow your words :),
>
> of an instance? of a class? OO?

They're an abstraction, with an interface.  How that interface is
implemented isn't particularly important to the question of how the overall
evaluation is handled.  The cell abstraction could simply be a module that
exports a set of functions, and the internal implementation could be
anything.

Put it this way: you were the one boasting earlier about "the punch-card
feel of using car and cdr to code circles around 'modern' java", but now you
can't do without CLOS?  ;)

> cacheing + lazy = needless re-calculation.
>
> a depends on b
>
> b depends on c
>
> c changes in such a way that b will recalculate the same result, so a is
> still valid.
>
> but we are lazy, so we just mark a as well as b obsolete, forcing a
> needless recalculation of a.

I think that's orthogonal to whether lazy evaluation is used.  To support
that functionality, some machinery is going to be needed either way.  A
dataflow mechanism can be implemented lazily, although that's not what I was
suggesting in my previous post.

> oh, i get it, lazy refers to the programmer who could not work out how
> to track dependencies, not the cpu getting a break. :)

Like I said, it should be possible to satisfy the requirements either way,
but programmer laziness is definitely a consideration in my preference here.
I'd first need to be convinced that this optimization is necessary for the
system in question, before implementing an infrastructure to handle it.

I looked around for info on this subject, and found a paper entitled
"Lessons Learned About One-Way, Dataflow Constraints in the Garnet and
Amulet Graphical Toolkits":
http://www-2.cs.cmu.edu/afs/cs/project/amulet/www/papers/toplas-constraint-experience.pdf

Some of the results described in this paper seem reasonably general.  Here
are some relevant quotes, talking about lessons learned with the dataflow
constraint solvers in Garnet and Amulet (Garnet's C++ equivalent), taken
from the beginning of section 5:

*  "It is not important to avoid unnecessary evaluations in graphical
interfaces."  In 5.3, they cite a 1% difference in application performance
due to unnecessary evaluations.

*  "Lazy evaluation performs better than eager evaluation, but generally not
by much."

In 5.1, there's a description of some issues with premature evaluation and
lack of evaluation, with eager and lazy strategies respectively, due to
interactions with the programmer's settings and code, and the evaluation
mechanism.  The details are in the paper.  Here's a quote from the summary:

  "The problem with premature evaluation in Amulet and lack of evaluation in
Garnet reveals that both eager and lazy evaluation still have problems that
need to be resolved. In our implementations, the premature evaluation
associated with eager evaluation was far more problematic for users than the
occasional lack of evaluation caused by lazy evaluation."

I don't know how fundamental these evaluation problems were, or if they were
just an artifact of these systems, but still, it's interesting that a lazy
approach gave a better result "automatically".

> besides, without eager evaluation you do not have dataflow as your
> paradigm.

Never say never...

> we need to get the programmer's hands off the controls at
> runtime, and produce instead an efficient, declarative paradigm like
> dataflow.

I'm all for declarative.  But the fact that it's declarative implies that
different execution models are possible.

> dataflow lets us build not just models but /working models/. eager
> evaluation is just a negative way of characterizing cause and effect.

Do you mean "negative" as a pejorative, or in some other way?  I'm not aware
of eager evaluation as being a negative characterization.  If anything, it's
laziness that has issues to overcome.  I'm not a proponent of laziness
everywhere - I just like it in some applications.

> cells introduce causation as a reliable friend of the programmer, who
> now just declares the causal dependencies, links the model to an event
> stream, then sits back in astonishment that it all Just Works.

Sounds good, although ideally, I'm wondering if the system can't do some of
the work of inferring causal dependencies, from the underlying computation.
Some of the continuation-based web apps can be seen as working this way.
I'm not sure how well this would translate to the more traditional GUI case,
though.

Anton
From: Kenny Tilton
Subject: Animation(tm)! [was Re: A pox on laziness! [was Re: A Critique of Common Lisp]]
Date: 
Message-ID: <3F1807B8.80608@nyc.rr.com>
Anton van Straaten wrote:
>>>Cells *should* be "purely functional slots" (to borrow your words :),
>>
>>of an instance? of a class? OO?
> 
> 
> They're an abstraction, with an interface.  How that interface is
 > implemented isn't particularly important...

Spoken like a true Ivory Tower, lost-in-the-clouds, Scheme-making 
academic. :) I am an apps guy, and my market is apps people. We want 
working code, not abstractions. Hell, I won't even try pushing my 
working Cells implementation until I have a portable GUI to include, 
because people just want stuff they can use right off the shelf.


> Put it this way: you were the one boasting earlier about "the punch-card
> feel of using car and cdr to code circles around 'modern' java", but now you
> can't do without CLOS?  ;)

I just got them working with structure objects as well, does that count? 
:) But there /does/ have to be an accessor involved so I can hide the 
plumbing; I could do global variables, but they would require exposed 
plumbing.

I mean, the dataflow (as I conceive it (see below)) /does/ have to begin 
with a settable place, altho after reading your most excellent citation 
on Garnet and Amulet dataflow I see that I have to find a new one 
because the academics have destroyed another good word.

Has anybody taken "animation" as a word to describe a programming 
paradigm? i'll describe it in detail below.

> 
> 
>>cacheing + lazy = needless re-calculation.
>>
>>a depends on b
>>
>>b depends on c
>>
>>c changes in such a way that b will recalculate the same result, so a is
>>still valid.
>>
>>but we are lazy, so we just mark a as well as b obsolete, forcing a
>>needless recalculation of a.
> 
> 
> I think that's orthogonal to whether lazy evaluation is used.  To support
> that functionality, some machinery is going to be needed either way.  A
> dataflow mechanism can be implemented lazily, ...

well, as hinted above, there's "dataflow" and then there is dataflow, 
er, animation! see below for my differentiation of the two.

> Like I said, it should be possible to satisfy the requirements either way,
> but programmer laziness is definitely a consideration in my preference here.

Me, too. I meant the laziness of the folks doing the constraint 
implementation. But that was just word-play, I do not mean laziness, I 
mean poor engineering and failure of imagination. Maybe I mean laziness. 
From what I see in the few academic papers I can even understand, 
these folks take an interesting idea, write crappy implementations 
(that's OK, I do that, too) but then instead of fixing the 
implementations they publish fancy papers about why the idea is no good.

> I looked around for info on this subject, and found a paper entitled
> "Lessons Learned About One-Way, Dataflow Constraints in the Garnet and
> Amulet Graphical Toolkits":
http://www-2.cs.cmu.edu/afs/cs/project/amulet/www/papers/toplas-constraint-experience.pdf

Nice find! Thx.

> Some of the results described in this paper seem reasonably general.

Not so fast! Reading between the lines, none of those implementations 
work like Cells. Their results are not my results, and from what details 
they offer I can tell the results are (as you suspected) artifacts of 
their implementations.

Some of the problems, such as uninitialized slots being encountered, 
cannot arise in the JIT approach of Cells. How they got into that 
particular bind I cannot imagine, but they must have done some pretty 
bad engineering. Their paper is not about dataflow or eager evaluation, 
it is about a few poor efforts at same.

<rant> Academics are pretty smart, much smarter than me. That can be a 
problem in engineering:

   http://www.generationterrorists.com/articles/broken_images.html

In my first few programming efforts I marshalled all my wits against 
every misbehavior thrown off by my models. That was exhausting. Now I 
refuse to work on any model that throws off misbehavior. Out it goes. 
But geniuses don't get exhausted, they enjoy getting the square peg into 
the round hole.

Then they write a paper denouncing pegs and holes!
</rant>

>>besides, without eager evaluation you do not have dataflow as your
>>paradigm.
> 
> 
> Never say never...

never! at least for what I call animation (ne dataflow). see below.

>>we need to get the programmer's hands off the controls at
>>runtime, and produce instead an efficient, declarative paradigm like
>>dataflow.
> 
> I'm all for declarative.  But the fact that it's declarative implies that
> different execution models are possible.

but not if the programmer ends up with any responsibility for making the 
dataflow happen, and below (!) I explain how something like Cells is the 
only way the programmer is truly programming declaratively.

>>dataflow lets us build not just models but /working models/. eager
>>evaluation is just a negative way of characterizing cause and effect.
> 
> 
> Do you mean "negative" as a pejorative, or in some other way?  I'm not aware
> of eager evaluation as being a negative characterization.  If anything, it's
> laziness that has issues to overcome.  I'm not a proponent of laziness
> everywhere - I just like it in some applications.

Yes, I meant pejorative. I have always heard eager dismissed as 
computationally too expensive, but then again I am going on a small 
sample, I don't get out much.

>>cells introduce causation as a reliable friend of the programmer, who
>>now just declares the causal dependencies, links the model to an event
>>stream, then sits back in astonishment that it all Just Works.
> 
> 
> Sounds good, although ideally, I'm wondering if the system can't do some of
> the work of inferring causal dependencies, from the underlying computation.

So you have not read the garbage on my web site! <g>

But more importantly, boy did I just misrepresent Cells. Rather 
astonishingly. No way I can even reconstruct how I got to that abysmal 
choice of words "declare the...dependencies".

Rest assured, Cells track dependencies completely transparently.

<<<< !!!! ---- Below ---- !!!! >>>>>

OK, regarding that paper, their dataflow is not my dataflow, their eager 
is not my eager.

Here is what I mean by dataflow:

In the OS event loop, events =flow=> input Cells, as in "inputs to the 
dataflow".

input Cells =flow=> dependent cells which in turn =flow=> other 
dependent cells. they "flow" by forcing dependent cells to recalculate 
immediately.

in optional, user-defined on-change callbacks, input or dependent cells 
=feed=> world outside the model, eg, by doing something like 
invalidating a portion of the screen to force a new OS event, a redraw. 
or by sending a message to a RoboCup server. [aside: this in fact 
happens /before/ the dependent cells get forced to recalculate.]

curiosity: on-change callbacks let input and dependent cells =flow=> 
input cells. but these feel like GOTOs, so I have trouble sleeping and 
try to avoid doing this.

now we loop back to the beginning, because, say, the new screen display 
motivates the user to click on something, or the robocup server sends an 
IPC reply.

Now the key here is that Cells feed the outside world when they change 
value when they get recomputed because their inputs have changed. so it 
is no good, say, just marking the OK button's 'enabled' slot as invalid 
when conditions on which that computation depends change. when those 
conditions change, the enabled slot must be recalculated so the enabled 
value will go to true and, since that slot describes a button, so the 
button rectangle of the screen will get invalidated so the OS will send 
us a paint message so the button will get redrawn as "enabled" so the 
user will know they can click it.

Now you /could/ just arbitrarily repaint the whole screen after each OS 
event just in case, but... well, we tried that just for laughs in the 
first week of development of Cells, even though we knew it would be too 
slow. it was, so by week two we were tracking dependencies intelligently.

When these guys talk about "dataflow", they mean "dataflow if the 
programmer takes other necessary steps to get the data to move along 
certain paths managed by our implementation". the application programmer 
still ends up making things happen by otherwise forcing the action such 
that the invalid stuff gets re-sampled and finally recalculated.

All a Cells programmer does is:

1. handle OS events with code like:

    (defmethod os-callback-proc (window event)
       (setf (os-event window) event))

    where os-event is an input cell any dependent cell can watch.

2. write declarative rules

3. write on-change callbacks

    (def-c-echo enabled ((self Image))
        (os-invalidate-rect (window self) (global-rect self)))

Naturally, I have a Cells-powered GUI which covers all the normal 
application development stuff, but that's the idea.

So the key is that Cell applications /work/ thru dataflow. It is not a 
casually applied metaphor; data literally flows from the OS into the 
application model and back out via what I call echo functions called 
when a cell slot changes.

Evaluation /must/ be eager, or the application does not work.
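
To make the contrast concrete, a minimal, hypothetical sketch of eager
propagation -- the shape of the idea only, not the real Cells internals
(which record the users/dependents transparently rather than by hand):

    (defstruct cell
      value            ; cached value
      rule             ; zero-argument closure, or NIL for an input cell
      (users nil)      ; cells that depend on this one
      (echo nil))      ; optional on-change callback to the outside world

    (defun set-cell (cell new-value)
      (unless (eql new-value (cell-value cell))    ; same value: flow stops
        (setf (cell-value cell) new-value)
        (when (cell-echo cell)                     ; feed the outside world
          (funcall (cell-echo cell) new-value))
        (dolist (user (cell-users cell))           ; eager: recalculate NOW
          (set-cell user (funcall (cell-rule user))))))

Note how the EQL test gives the a/b/c behavior discussed earlier: if b
recomputes to the same value, nothing downstream gets touched.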

Put another way, programmers are always building models, but with cells 
they are building /working/ models. (hence "animation" as my now 
trademarked paradigm name.) Inputs /cause/ an ineluctable, orderly 
cascade of state change throughout the model and into the outside world 
via screen invalidations or IPC messages or any API to a physical device.

And it is all hands-off from the perspective of the developer building 
the model.

Meanwhile, TeamKenny beat the Saddamites to the ball after the initial 
kickoff, but the kickoff was a real wallop so they had to run back 
towards their own end to get to the ball.

TK should have run around the ball before trying to kick it towards the 
Saddamite goal. :( The attempted -170 degree kick sailed the ball across 
the field and out of bounds, and none of the players was ready for the 
throw_in_left event.

Anyone know of an Arena Soccer Simulation League? I need walls. :(


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Matthias
Subject: Re: Animation(tm)! [was Re: A pox on laziness! [was Re: A Critique of Common Lisp]]
Date: 
Message-ID: <bfg7b7$8ev$1@trumpet.uni-mannheim.de>
Kenny Tilton wrote:

> I mean, the dataflow (as I conceive it (see below)) /does/ have to begin
> with a settable place, altho after reading your most excellent citation
> on Garnet and Amulet dataflow I see that I have to find a new one
> because the academics have destroyed another good word.
> 
> Has anybody taken "animation" as a word to describe a programming
> paradigm? i'll describe it in detail below.

I'm not 100% sure I understood your description "below" about 
Cells/dataflow/animation correctly.  But it might be related to the 
Observer pattern of the GoF book (Gamma et al., "Design Patterns", 
p.293ff):

"Define a one-to-many dependency between objects so that when one object 
changes state, all its dependents are notified and updated automatically."

If you chain the observers you get something most people would probably call 
a dataflow architecture.  If you look at the book chapter and google for 
"Observer pattern" you'll find some discussions on implementational issues 
and reference implementations as well.
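
For comparison, the GoF wiring in bare-bones CL (hypothetical, trimmed to
the essentials):

    (defclass subject ()
      ((observers :initform nil :accessor observers)))

    (defgeneric update (observer subject)
      (:documentation "Called on each observer when SUBJECT changes."))

    (defun notify (subject)
      (dolist (obs (observers subject))
        (update obs subject)))

Attaching observers and remembering to call NOTIFY is left entirely to
the programmer.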

Matthias
From: Kenny Tilton
Subject: Re: Animation(tm)! [was Re: A pox on laziness! [was Re: A Critique of Common Lisp]]
Date: 
Message-ID: <3F1BE9BD.2090508@nyc.rr.com>
Matthias wrote:
> Kenny Tilton wrote:
> 
> 
>>I mean, the dataflow (as I conceive it (see below)) /does/ have to begin
>>with a settable place, altho after reading your most excellent citation
>>on Garnet and Amulet dataflow I see that I have to find a new one
>>because the academics have destroyed another good word.
>>
>>Has anybody taken "animation" as a word to describe a programming
>>paradigm? i'll describe it in detail below.
> 
> 
> I'm not 100% sure I understood your description "below" about 
> Cells/dataflow/animation correctly.  But it might be related to the 
> Observer pattern of the GoF book (Gamma et al., "Design Patterns", 
> p.293ff):

i am familiar with that work, and i usually cite it as prior art. but 
look at all the wiring you have to do, and the granularity is at the 
object level. cells are almost perfectly transparent and work at the 
slot level.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: cr88192
Subject: Re: Animation(tm)! [was Re: A pox on laziness! [was Re: A Critique of Common Lisp]]
Date: 
Message-ID: <vic485c8c35va5@corp.supernews.com>
> [snip]

much of this has gotten me thinking, I may just rip you off.

since I have the liberty of my own interpreter, I can do this within the
interpreter, though it will likely be a little kludgy.

I will state my idea, pardon if I seriously missed things:

constraints would exist as fragments;
each fragment would be initially invalid;
once a fragment is computed, it tries to fetch the values of any fragments
it references;
if a fragment fetches a value from another fragment, the former is noted in
the latter to propagate invalidations;
if a fragment is invalidated, it invalidates any that reference it, unless
the recomputed value is the same as the previous value in which case it is
not necessary to propagate invalidations (I am unsure whether to take a lazy
or eager approach to recomputation).
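
in rough CL terms, a sketch of that scheme (hypothetical names; the
same-value cutoff is omitted, since it requires recomputing at
invalidation time):

    (defstruct fragment
      (valid nil) value rule (dependents nil))

    (defun fragment-ref (frag)
      "Fetch a fragment's value, recomputing it if it was invalidated."
      (unless (fragment-valid frag)
        (setf (fragment-value frag) (funcall (fragment-rule frag))
              (fragment-valid frag) t))
      (fragment-value frag))

    (defun invalidate (frag)
      "Mark FRAG and everything that references it as stale."
      (when (fragment-valid frag)
        (setf (fragment-valid frag) nil)
        (mapc #'invalidate (fragment-dependents frag))))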

now the kludgy bit:
I could hide this within the interpreter by making lookups, if they find
such a value, extract/recompute the value and return that instead;
all the inputs will also have to be fragments, though ones without any
computation or dependencies (and as a result are always valid, but will
invalidate dependencies if modified...).

this is simple but unidirectional; bidirectional is far more work than I am
willing to do.

of course I may not implement this now, this, like predicate dispatch, may
be one of the features I never seem to get to.
otherwise I started documenting my core interpreter and doing other trivial
stuff (like optimizing my net code a little so that it is a tiny bit less
insanely slow...).

of course like always this will probably be viewed as uninteresting and
likely ignored...
From: Kenny Tilton
Subject: Re: Animation(tm)! [was Re: A pox on laziness! [was Re: A Critique of Common Lisp]]
Date: 
Message-ID: <3F268978.7010203@nyc.rr.com>
cr88192 wrote:
>>[snip]
> 
> 
> much of this has gotten me thinking, I may just rip you off.
> 
> since I have the liberty of my own interpreter, I can do this within the
> interpreter, though it will likely be a little kludgy.
> 
> I will state my idea, pardon if I seriously missed things:
> 
> constraints would exist as fragments;
> each fragment would be initially invalid;
> once a fragment is computed, it tries to fetch the values of any fragments
> it references;
> if a fragment fetches a value from another fragment, the former is noted in
> the latter to propagate invalidations;

yes. unfortunately, this breaks GC, because when the app is thru with 
something it has to tell other things to forget about it explicitly or 
it just stays around forever (and keeps being recalculated). So a fragment 
needs to know not just who uses it, but who it uses, so it can shuffle 
off this mortal coil gracefully.

 > if a fragment is invalidated, it invalidates any that reference it

that is the lazy approach. my approach is to recalculate any dependent 
value immediately, hence dataflow (animation!) arises /for real/, not 
as a sloppy metaphor.

> 
> ...bidirectional is far more work than I am
> willing to do.

might even be intractable.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: cr88192
Subject: Re: Animation(tm)! [was Re: A pox on laziness! [was Re: A Critique of Common Lisp]]
Date: 
Message-ID: <vid9l01hll2937@corp.supernews.com>
>
> cr88192 wrote:
> >>[snip]
> >
> >
> > much of this has gotten me thinking, I may just rip you off.
> >
> > since I have the liberty of my own interpreter, I can do this within the
> > interpreter, though it will likely be a little kludgy.
> >
> > I will state my idea, pardon if I seriously missed things:
> >
> > constraints would exist as fragments;
> > each fragment would be initially invalid;
> > once a fragment is computed, it tries to fetch the values of any
> > fragments it references;
> > if a fragment fetches a value from another fragment, the former is
> > noted in the latter to propagate invalidations;
>
> yes. unfortunately, this breaks GC, because when the app is thru with
> something it has to tell other things to forget about it explicitly or
> it just stays around forever (and being recalculated). So a fragment
> needs to know not just who uses it, but who it uses so it can shuffle
> off this mortal coile gracefully.
>
hmm, yes, I did not take this into account. I could finish weak reference
support and just use weak references for the registered fragments (and on
invalidation/registration any dropped references are cleared).
what I was unsure of was how to handle weak references without using a
second duplicate/dummy mark phase to handle references that went away (worth
noting: in my idea weak references are collected and just replaced by a null
value if they go away, then other code is expected to notice this).
another thought had been that they would be noted, and somehow a way of
later assigning the value would be needed (i.e., calling the mark func with a
pointer to the reference, so that it can be noted and possibly cleared if
weak).
hmm, yes...
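
One way to express that in Common Lisp terms, assuming the FRAG sketch
from earlier in the thread and an implementation-specific weak-pointer
API (ANSI CL has none; SBCL, for example, provides
SB-EXT:MAKE-WEAK-POINTER and SB-EXT:WEAK-POINTER-VALUE):

(defun note-dependent (f dep)
  ;; register DEP weakly, so the GC stays free to reclaim it
  (push (sb-ext:make-weak-pointer dep) (frag-dependents f)))

(defun live-dependents (f)
  ;; collect referents still alive, pruning broken (collected) pointers
  (let (live kept)
    (dolist (wp (frag-dependents f))
      (let ((obj (sb-ext:weak-pointer-value wp)))
        (when obj
          (push obj live)
          (push wp kept))))
    (setf (frag-dependents f) kept)
    live))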

>  > if a fragment is invalidated, it invalidates any that reference it
>
> that is the lazy approach. my approach is to recalculate immediately any
> dependendent value, hence dataflow (animation!) arises /for real/, not
> as a sloppy metaphor.
>
ok, I was torn between the ideals of invalidation or forcing
recomputation...
there are good/bad points to either though.
hmm, decisions...

assuming they are recomputed only sparsely, lazy could get better
performance, and the cost of assigning will be more known (though using the
value could involve recomputation).
likely I will need a special primitive for assignment, since given the
semantics neither allows functions or application; I could tweak out set! but
that would make it nearly impossible to get rid of them besides lexical
scoping, and would from that point on affect the slot semantics...
I could add 'assign!', which would either 'set!' the slot or change the
value of a constraint fragment, depending on where it is used...

> >
> > ...bidirectional is far more work than I am
> > willing to do.
>
> might even be intractable.
>
I previously read a paper about non-binary bidirectional constraint
solving, and I thought: no... I am not going to implement anything like
that...
it will be considered a programmer error if part of a computation happens to
refer back to itself, and shoving values at the output will have no effect
on the input, ...


oh yes:
I am splitting off my interpreter from my main project (it is now a dll, and
is just used by the project and not technically part of it, and it no longer
shares headers, ...).
I also decided to put the interpreter under lgpl as well (rest of project
will remain gpl).

annoyingly the gc is separate from the interpreter, but I am unsure how I
would handle separating it from the main project (as the store-related
portion depends on file io, which is controlled by the main project). I
could make the gc another dll, and patch through the file interface the
interpreter uses, or have another gc-main app interface (besides file io and
printing, and a heavy dependence on the interpreter, it is mostly
self-contained).
splitting it from the main project could be done, but I am unsure if it
belongs in the interpreter either.

otherwise anyone using the interpreter would need to come up with their own
mm/gc, or rip the one from my project (needing to alter the file io/printing
stuff to go through whatever interface they use though...).

otherwise:
main project has been moved in the direction of an rpg...

all for now.

in case anyone cares:
http://sourceforge.net/projects/bgb-sys/
http://bgb-sys.sourceforge.net/
refer to my project.

note: a fair amount has been done since the 27th, one of the few times
where I am doing stuff fairly quickly...
the net crap is probably broken in the version posted; there was a bug in the
code for writing 8-bit strings (i.e., it would forget the string...).
From: Thomas F. Burdick
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <xcvlluwemq7.fsf@famine.OCF.Berkeley.EDU>
Kenny Tilton <·······@nyc.rr.com> writes:

> Is someone seriously suggesting a person using CL regularly ends up
> reinventing function X because they /forgot it was there/????

So, Kenny, you still remember that set-exclusive-or exists?  :)

I bet that one's the #1 for reinvention by experienced Lispers.
Certainly gotta be in the top 5.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F17437F.5050905@nyc.rr.com>
Thomas F. Burdick wrote:
> Kenny Tilton <·······@nyc.rr.com> writes:
> 
> 
>>Is someone seriously suggesting a person using CL regularly ends up
>>reinventing function X because they /forgot it was there/????
> 
> 
> So, Kenny, you still remember that set-exclusive-or exists?  :)

you mean?:

(defun xor (l1 l2)
    (nconc (remove-if (lambda (x) (find x l2)) l1)
           (remove-if (lambda (x) (find x l1)) l2)))

<g>

come to think of it, i bet we get to use a lot of these list operations 
in our RoboCup client as we winnow down the players in view to the ones 
we need to reckon with during play.

Figured out the saddamite could not kick the ball because it tried to 
get within the kickable margin before kicking (duh). But the server 
gives distances between centers, and the kick margin is measured between 
the perimeters of the ball and player, so we just have to get within the 
kickable margin plus the two radii.
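
In code form (hypothetical names, but the same arithmetic):

(defun ball-kickable-p (center-distance kick-margin ball-radius player-radius)
  ;; the server reports center-to-center distance; the margin applies
  ;; between the perimeters, hence adding both radii
  (<= center-distance (+ kick-margin ball-radius player-radius)))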

we should have "play_on" in a few minutes!!! we should then have a nice 
sim of soccer for three-year-olds, because everyone (including the 
goalies!) is instructed to chase the ball.

:)

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Jon S. Anthony
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <m3y8ywwhd1.fsf@rigel.goldenthreadtech.com>
···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Kenny Tilton <·······@nyc.rr.com> writes:
> 
> > Is someone seriously suggesting a person using CL regularly ends up
> > reinventing function X because they /forgot it was there/????
> 
> So, Kenny, you still remember that set-exclusive-or exists?  :)
> 
> I bet that one's the #1 for reinvention by experienced Lispers.
> Certainly gotta be in the top 5.

This most likely happens (at least it seems so to me) for one (or both)
of the following reasons:

1) The complexity of this (and the other set operations) in typical
   implementations is O(n^2)

2) These operations only work on lists
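
For comparison, a hash-table-based sketch (illustrative only, and
restricted to the standard hash-table tests) that brings the
exclusive-or down to roughly O(n):

(defun hashed-set-exclusive-or (l1 l2 &key (test 'eql))
  ;; TEST must be one of EQ, EQL, EQUAL, EQUALP (the hash-table tests)
  (let ((in1 (make-hash-table :test test))
        (in2 (make-hash-table :test test)))
    (dolist (x l1) (setf (gethash x in1) t))
    (dolist (y l2) (setf (gethash y in2) t))
    (append (remove-if (lambda (x) (gethash x in2)) l1)
            (remove-if (lambda (y) (gethash y in1)) l2))))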



/Jon
From: Matthias
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <bf5vt0$9vr$1@trumpet.uni-mannheim.de>
Don Geddis wrote:

>> Although having a simple language certainly contributes to the ease of
>> learning. A 1000 page manual somehow is discouraging.
> 
> Common Lisp is designed to be highly efficient for expert programmers to
> use.
> Ease of learning is secondary.  (Even that can be dealt with by a properly
> written tutorial.  Nobody is stopping an instructor from teaching a simple
> subset of CL.)

Common Lisp was primarily designed to be compatible with the once-flourishing 
Lisp dialects (thus the name Common Lisp).  It was designed to be easy to 
learn for users of those dialects.  Now that these dialects, and knowledge 
of the historical developments, slowly vanish, the design goal of "backwards 
compatibility" is questioned (by newcomers).  

In other words: Expert programmers are not more efficient because they use a 
language which is as irregular as the following lists demonstrate:

1.) (append . nconc), (remove . delete), (map . map-into)
2.) null, alpha-char-p, arrayp, member
3.) member, adjoin, intersection, set-difference, subsetp

They (the expert programmers) might just not care.
From: Kent M Pitman
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <sfwispya4yw.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Don Geddis <···@geddis.org> writes:

> Humans are capable of remembering a huge amount of information.  Natural
> languages are much more complex than any artificial language.  Yet native
> speakers seem to do just fine.
> 
> This seems like a silly (top-level) goal.

I don't really disagree with your point, but I want to add the following
additional information as "background color":

There WAS some international complication about getting translations
of ANSI CL into other national languages.  I guess they are used to
chunking the cost of this into budget-sized smaller units.

Japan, for example, was much happier with the size of the ISLISP
specification, for which I believe a budget was created to translate it
into Japanese.

However, the goals of the two languages are, as you state, really quite
different.  It's nice to have a language whose spec you can read or 
translate, but it's also nice to have one you can use...

> > Although having a simple language certainly contributes to the ease of
> > learning. A 1000 page manual somehow is discouraging.
> 
> Common Lisp is designed to be highly efficient for expert programmers to use.
> Ease of learning is secondary.  (Even that can be dealt with by a properly
> written tutorial.  Nobody is stopping an instructor from teaching a simple
> subset of CL.)

Exactly.  ANSI CL is _not_ a teaching text.  That it is confused with
one is more a remark on how much more accessible-than-average and
readable-than-average we made it.  In most other languages, no one would
consider reading the specification.

Scheme is quite readable, but then, it provides only about 1/10th the
overall functionality of ANSI CL, and leaves out all the complicated
details of error handling, branch cuts, portability concerns,
etc. that might "mess up the read"... and, speaking historically as
someone who was in the room at the time of the decision making for the
language, it seemed to me that many of the votes were explicitly
favoring "a good read" rather than these other matters (such as
portability).  Building on my experiences with CL, I suggested adding
some of that other material and many of the other Scheme authors made
clear their feeling that it was "just clutter".  As I recall, I got
strong sympathy in my position from the few people who were
maintaining actual implementations, and who got bug reports on these
things or had to make implementation decisions about these things, but
not from people whose goal was just to write (and defend) textbooks.
From: Frank A. Adrian
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <QSgRa.142$lf1.63220@news.uswest.net>
thelifter wrote:
> 2. Compiler Simplicity. It should be possible to write a compiler
> which produces
> very efficient code, and it should not be too great an undertaking to
> write such a
> compiler perhaps a man-year at the outside.

This may have been true in the early nineties, when code generation was easy
to do well on most processors.  Today, with the complexity and
interdependencies of instruction timings, it would be almost impossible to
build a compiler that implemented "very efficient code" in a man-year. 
This is true even for simple languages like C.  The hard part (as in C
compilers these days) is register allocation and instruction scheduling. 
Once the lisp compiler can build the dataflow graphs that describe the
type-checking code, the compilers look almost the same.  And, they're both
hard to build.  So it goes...

faa
From: Kaz Kylheku
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <cf333042.0307161421.7e41b9da@posting.google.com>
·········@gmx.net (thelifter) wrote in message news:<····························@posting.google.com>...
> For example why does a Hashtable need to be part of the language
> definition.

Probably because it's already used in the guts of the package system
and reader for looking up symbols, and probably used within the
compiler and other pieces. So it would probably be hard to remove
hashes from a typical Lisp implementation.

> If you put most stuff into packages, if the user doesn't need a
> hashtable, he can work with a smaller language. Just add what you
> need.

When N developers each add what they need, you end up with N
languages. When you try to combine code written by these developers,
you merge these languages. Four incompatible string classes, three
different hash tables, ...
 
> The practical side shows that Scheme seems to get it right. 

Scheme *implementations* are large. Scheme programs use the
extensions, and end up being tied to particular Scheme
implementations, even for features that could in principle be done
portably---things that are not platform bindings.

DrScheme, MzScheme and Chez Scheme and so forth are essentially
languages in their own right. There are DrScheme applications that
won't run on any other Scheme.

Allegro, CMUCL and Corman are also languages in their own right, but
they share a heck of a lot more than the Schemes. This makes it
possible to write portable software consisting of less code. The
software doesn't have to contain its own package system, object
system, etc.

When you are writing an application, you need certain pieces to be
there. It's just a matter of who brings what to the table. Some things
come from the language, some things from the implementation, some
things from third party libraries and some you write yourself.

We can make a pie chart with four slices to see how much came from
where. In Lisp programming, the ``from language'' slice is bigger.
This is good because things that come from the language have nice
properties. They are standard and portable. You don't have to write
them, or hunt them down and integrate them into your environment, and
you don't end up with multiple incompatible implementations of them in
the same image.

> Once you have
> a Scheme you can use all the packages that conform to the standard. So
> in principle you should have the power of CL but with much more
> simplicity in the core language.

Oops, Turing equivalence argument. Computational power is not
expressive power.
From: Barry Margolin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <dYkRa.239$0z4.216@news.level3.com>
In article <····························@posting.google.com>,
Kaz Kylheku <···@ashi.footprints.net> wrote:
>·········@gmx.net (thelifter) wrote in message
>news:<····························@posting.google.com>...
>> For example why does a Hashtable need to be part of the language
>> definition.
>
>Probably because it's already used in the guts of the package system
>and reader for looking up symbols, and probably used within the
>compiler and other pieces. So it would probably be hard to remove
>hashes from a typical Lisp implementation.

Also, EQ/EQL hash tables often need to hook into the GC mechanism.  So if
hash tables weren't built in, we would have had to provide a way for
applications to be notified when a GC has made rehashing necessary.

>> If you put most stuff into packages, if the user doesn't need a
>> hashtable, he can work with a smaller language. Just add what you
>> need.
>
>When N developers each add what they need, you end up with N
>languages. When you try to combine code written by these developers,
>you merge these languages. Four incompatible string classes, three
>different hash tables, ...

Which is pretty much why Common Lisp happened.  Not only were there a
number of incompatible Lisp dialects that needed to be reconciled, but
within each dialect there were lots of different versions of commonly-used
utilities and macros.  For instance, in the Maclisp community there were
several different IF macros, and when you were reading someone else's code
you needed to know which flavor they were using.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Damien R. Sullivan
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <bf4si5$g1k$1@hood.uits.indiana.edu>
Barry Margolin <··············@level3.com> wrote:

>>When N developers each add what they need, you end up with N
>>languages. When you try to combine code written by these developers,
>>you merge these languages. Four incompatible string classes, three
>>different hash tables, ...
>
>Which is pretty much why Common Lisp happened.  Not only were there a
>number of incompatible Lisp dialects that needed to be reconciled, but
>within each dialect there were lots of different versions of commonly-used
>utilities and macros.  For instance, in the Maclisp community there were
>several different IF macros, and when you were reading someone else's code
>you needed to know which flavor they were using.

My fear of CL is that the reconciliation happened by jamming several different
ways of doing things together, so as to make all users of the various dialects
happy.  I don't know how true that is.  Vs. a simple and elegant Scheme core
which was also full-featured and fast and portable.  I'm not sure the latter
exists, not if the latter includes multi-platform graphics, REPL, bignums, and
compiled speed within a close range of C++...

-xx- Damien X-) 
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F15FC65.404@nyc.rr.com>
Damien R. Sullivan wrote:
> Barry Margolin <··············@level3.com> wrote:
> 
> 
>>>When N developers each add what they need, you end up with N
>>>languages. When you try to combine code written by these developers,
>>>you merge these languages. Four incompatible string classes, three
>>>different hash tables, ...
>>
>>Which is pretty much why Common Lisp happened.  Not only were there a
>>number of incompatible Lisp dialects that needed to be reconciled, but
>>within each dialect there were lots of different versions of commonly-used
>>utilities and macros.  For instance, in the Maclisp community there were
>>several different IF macros, and when you were reading someone else's code
>>you needed to know which flavor they were using.
> 
> 
> My fear of CL is that the reconciliation happened by jamming several different
> ways of doing things together, so as to make all users of the various dialects
> happy.  I don't know how true that is.  

But you are going to fear it anyway? Jeez, give the ansi committee some 
credit.

I think this discussion could upgrade itself from absurd to tiresome if 
someone would point to a specific case of "jamming several different
ways of doing things together".

Otherwise, can we get back to RoboCup? How's everyone's teams coming?

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Johan Kullstam
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <87oezt6wd9.fsf@sysengr.res.ray.com>
Kenny Tilton <·······@nyc.rr.com> writes:

> Damien R. Sullivan wrote:
> > Barry Margolin <··············@level3.com> wrote:
> >
> >>>When N developers each add what they need, you end up with N
> >>>languages. When you try to combine code written by these developers,
> >>>you merge these languages. Four incompatible string classes, three
> >>>different hash tables, ...
> >>
> >>Which is pretty much why Common Lisp happened.  Not only were there a
> >>number of incompatible Lisp dialects that needed to be reconciled, but
> >>within each dialect there were lots of different versions of commonly-used
> >>utilities and macros.  For instance, in the Maclisp community there were
> >>several different IF macros, and when you were reading someone else's code
> >>you needed to know which flavor they were using.
> > My fear of CL is that the reconciliation happened by jamming several
> > different
> > ways of doing things together, so as to make all users of the various dialects
> > happy.  I don't know how true that is.
> 
> But you are going to fear it anyway? Jeez, give the ansi committee
> some credit.
> 
> I think this discussion could upgrade itself from absurd to tiresome
> if someone would point to a specific case of "jamming several different
> ways of doing things together".

Well, there are the various getting functions.  Graham points out in
_On Lisp_ pp. 195-197 how various functions handle getting NIL as the
value versus returning NIL meaning they couldn't find what you're
looking for.  This is due to historical reasons, since the imho more
clever method used by GETHASH couldn't exist until multiple-value
return was added to Lisp.

GETHASH returns two values, the thing gotten and a flag to say if it
exists.

GET returns one value but takes a "default" argument to return in case
the property is missing.  If the property's value really could be NIL,
you usually set the default to a gensym.  This is a weird way of doing
things and not half as smooth as the two return values of GETHASH.

ASSOC and MEMBER return a pointer to extra structure (which doesn't
cost anything consing-wise, but is different).

It would be easier if the accessor functions were made more regular.
They didn't so much jam together as accrete, but some inconsistency is
allowed to keep backward compatibility.
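
Concretely, the two conventions side by side:

;; GETHASH: the second return value distinguishes a stored NIL from absence.
(let ((h (make-hash-table)))
  (setf (gethash 'color h) nil)
  (multiple-value-bind (value presentp) (gethash 'color h)
    (list value presentp)))              ; => (NIL T)

;; GET: pass a fresh gensym as the default; only a missing property
;; can come back EQ to it.
(let ((default (gensym)))
  (setf (get 'fred 'age) nil)
  (eq (get 'fred 'age default) default)) ; => NIL, so AGE really is there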

> Otherwise, can we get back to RoboCup? How's everyone's teams coming?
> 
> -- 
> 
>   kenny tilton
>   clinisys, inc
>   http://www.tilton-technology.com/
>   ---------------------------------------------------------------
> "Everything is a cell." -- Alan Kay
> 

-- 
Johan KULLSTAM <··········@comcast.net> sysengr
From: Barry Margolin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <XCzRa.244$0z4.126@news.level3.com>
In article <············@hood.uits.indiana.edu>,
Damien R. Sullivan <········@cs.indiana.edu> wrote:
>My fear of CL is that the reconciliation happened by jamming several different
>ways of doing things together, so as to make all users of the various dialects
>happy.

Your point is valid.  When the CL designers adopted features from existing
dialects and macro/utility packages, they mostly tried to avoid changing
them incompatibly.  Since there was no coordination in the original design
of these features, the resulting language is a bit of a hodge-podge.

Most of CL came out of either Maclisp or Zetalisp, and each of these was
pretty self-consistent.  So the result isn't *too* bad.  It's like spelling
and conjugation: a few rules handle most cases, and you learn the handful
of exceptions by rote.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <4r1lhv73.fsf@ccs.neu.edu>
Barry Margolin <··············@level3.com> writes:

> Most of CL came out of either Maclisp or Zetalisp, and each of these was
> pretty self-consistent.  So the result isn't *too* bad.  

Although many of the Zetalisp features were `stripped down' in order
to be `conservative', so there are some warts there, too.
From: Wade Humeniuk
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <lKzRa.17902$xn5.2278431@news0.telusplanet.net>
"Barry Margolin" <··············@level3.com> wrote in message
······················@news.level3.com...
> In article <············@hood.uits.indiana.edu>,
> Damien R. Sullivan <········@cs.indiana.edu> wrote:
> >My fear of CL is that the reconciliation happened by jamming several different
> >ways of doing things together, so as to make all users of the various dialects
> >happy.
>
> Your point is valid.  When the CL designers adopted features from existing
> dialects and macro/utility packages, they mostly tried to avoid changing
> them incompatibly.  Since there was no coordination in the original design
> of these features, the resulting language is a bit of a hodge-podge.
>

Personally I like the hodge-podge.  I like that it has flaws and inconsistencies.
It makes it more "human", more fallible.  It also gives it history, a connection
to the past.  A language could be more consistent and hygienic, but it would
become a sterile, dead thing and give an illusion that things can be perfect in the
way suggested.  As I get older I like messiness better; it is more honest.

Wade
From: Kenny Tilton
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F16CE75.70309@nyc.rr.com>
Wade Humeniuk wrote:
> "Barry Margolin" <··············@level3.com> wrote in message
> ······················@news.level3.com...
> 
>>In article <············@hood.uits.indiana.edu>,
>>Damien R. Sullivan <········@cs.indiana.edu> wrote:
>>
>>>My fear of CL is that the reconciliation happened by jamming several different
>>>ways of doing things together, so as to make all users of the various dialects
>>>happy.
>>
>>Your point is valid.  When the CL designers adopted features from existing
>>dialects and macro/utility packages, they mostly tried to avoid changing
>>them incompatibly.  Since there was no coordination in the original design
>>of these features, the resulting language is a bit of a hodge-podge.

I thought we were talking about different ways of doing the /same/ 
thing, and was braced for elt-nth-aref or car-first. Inconsistencies, 
sure. But I still have the context of Gabriel et al.'s "too big to keep it 
all in my mind" oddity.

> It also gives it history, a connection
> to the past.  

Amen. If someone were now to clean up the idiosyncrasies we would have 
an infinitesimally cleaner language with considerably less charm for 
those of us who get a kick out of using a language going on fifty and 
who like the punch-card feel of using car and cdr to code circles around 
"modern" java.

Lisp is a wooden sloop slicing and dicing its way thru the graphite + 
carbon fiber America's Cup fleet of Java, Python and Perl. (Call me a 
Luddite, but I love it when one of the real Cup competitors gets into a 
little chop and decides to snap in two. <g>)

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Barry Margolin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <fTARa.245$0z4.132@news.level3.com>
In article <··············@nyc.rr.com>,
Kenny Tilton  <·······@nyc.rr.com> wrote:
>I thought we were talking about different ways of doing the /same/ 
>thing, and was braced for elt-nth-aref or car-first. Inconsistencies, 
>sure. But I still have the context of gabriel et als "too big to keep it 
>all in my mind" oddity.

While CL suffers from that problem, I don't think it makes the language
hard to learn or use, because you don't have to remember all the ways of
doing something.  If you learn a particular style, it often doesn't matter
that you have learned the other, similar ways of doing it.

What makes a language too complex as it grows is when you have to learn
lots of inconsistent ways of doing things.  For instance, Maclisp had a
different mutator function for each type of object, with no pattern to
their names (SET for symbol value cells, RPLACA/RPLACD for conses, I can't
even remember what it used for arrays, etc.); Common Lisp retained most of
these for compatibility, but added SETF as the recommended, general-purpose
assignment operator.
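
For instance, where Maclisp needed a differently-named mutator per type,
one operator now covers them all:

(let ((v (vector 1 2 3))
      (c (cons 'a 'b))
      (h (make-hash-table)))
  (setf (aref v 0)     10      ; array element
        (car c)        'z      ; cons cell (formerly RPLACA)
        (gethash :k h) 42)     ; hash-table entry
  (values v c (gethash :k h))) ; => #(10 2 3), (Z . B), 42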

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Joerg-Cyril Hoehle
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <uel09uwix.fsf@T-Systems.com>
Kenny Tilton <·······@nyc.rr.com> writes:
> But I still have the context of gabriel et als "too big to keep it 
> all in my mind" oddity.

Did one of you also consider all the other things that today's typical
application programmer needs to know about?

o the application domain
o SQL
 - Oracle idiosyncrasies
 - MSSQL idiosyncrasies
 - MySQL idiosyncrasies
o interfacing to SQL databases (e.g. ODBC/JDBC)
o HTML
 - 1.0, 1.1, 2.0, 3.2, 4.0
 - DTD -- most programmers short-circuit that
o HTTP possibly
 - (chunked encoding, sessions, caching etc.)
 - some aspects of its configuration (lifetime of servlets etc.)
o XML
 - namespaces
 - Schema
 - XSLT (yuk)
o GUI
 - Motif/Gnome/MS-Windows-1000APIs/xyz
 - callback system (if such)
 - dependencies and order of invocation (beware of uninitialized slots)
 - layout engine configuration
o Javascript/ECMAscript
o dynamic HTML and DOM (modifying part of the HTML/DOM inside the browser)
o incompatibilities between MS-IE 4.x/5.0/5.5/6.x and Netscape or Mozilla
o ...
o design
o logic
o transaction concepts (most miss those)
o concepts from distributed systems
o testing concepts (design for testability or such)
o writing useful code documentation instead of superfluous junk
o ...

o not everything is a nail and I should know about more than a hammer.

Does it appear to anybody that this is also "too big to keep it all in
one's mind"? Do these people draw the natural conclusion that such
systems built by too few people with too limited and partial knowledge
therefore cannot be the best conceivable ones?


The beauty of Lisp is that with it, I don't need much of the modern
stuff (crap) which seems just designed to waste human resources.
E.g. I use SXML, tree transformations and macros instead of
unmaintainable XSLT to obtain nicely factored, readable/reviewable
code/data and some (X)HTML/XML generation facility which is guaranteed
to generate well-formed X/HT/ML instead of embedded print('</table>)
statements.


I do not feel that CL is too large. I had a recent Scheme experience
where I spent several days I could have spent on something more
valuable, porting code which is known to work on Scheme system-A to
Scheme system-B because Scheme is IMHO too small for practice:
Everything had to do with things already in CL: WITH-OUTPUT-TO-STRING,
FILE-POSITION, HANDLER-CASE, &OPTIONAL, declarations etc. (yes, I know
about SRFIs, but what implementation features what SRFI-n?).


Another example where the Scheme reference is IMHO too succinct,
whereas ANSI-CL is precisely defined: can you tell me the result of 
(positive? 0)
(negative? 0)
just from the R5RS or R4RS?
Why does XML Schema have both types like nonNegativeInteger and
positiveInteger?
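
(ANSI CL, for the record, pins the analogous cases down explicitly; see
the CLHS entries for PLUSP and MINUSP:

(plusp 0)     ; => NIL
(minusp 0)    ; => NIL
(plusp 0.0)   ; => NIL
(minusp -0.0) ; => NIL

whereas for Scheme the answer has to be inferred.)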

Regards,
	Joerg Hoehle
TSI ITC-Security Technologiezentrum
From: Bruce Lewis
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <nm9ispl1c44.fsf@magic-pi-ball.mit.edu>
Joerg-Cyril Hoehle <······@spam.com> writes:

> I do not feel that CL is too large. I had a recent Scheme experience
> where I spent several days I could have spent on something more
> valuable, porting code which is known to work on Scheme system-A to
> Scheme system-B because Scheme is IMHO too small for practice:

You CL advocates are always so careful not to offend anybody.  Go ahead,
name the implementations.  We can take it.

> Another example where the Scheme reference is IMHO too succint,
> whereas ANSI-CL is precisely defined: can you tell me the result of 
> (positive? 0)
> (negative? 0)
> just from the R5RS or R4RS?

Is this a trick question?  Of course they both return #f.

> Why does XML Schema have both types like nonNegativeInteger and
> positiveInteger?

Another trick question?  The former would include zero, the latter
wouldn't.
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <IdyVa.26444$Mc.2063457@newsread1.prod.itd.earthlink.net>
Joerg Hoehle wrote:
> Another example where the Scheme reference is IMHO too succint,
> whereas ANSI-CL is precisely defined: can you tell me the result of
> (positive? 0)
> (negative? 0)
> just from the R5RS or R4RS?

Er, may I refer you to the universally-accepted definitions of the words
'positive' and 'negative' as they relate to numbers?

http://mathworld.wolfram.com/Positive.html : A quantity greater than zero
http://mathworld.wolfram.com/Negative.html : A quantity less than zero

Specification documents exist in a context.  You're suggesting it's a
disadvantage that RnRS doesn't define basic arithmetic concepts?

> Why does XML Schema have both types like nonNegativeInteger and
> positiveInteger?

http://mathworld.wolfram.com/Nonnegative.html : A quantity which is either
zero or positive.  Of course, it's trivial to define this in terms of
predicates like negative? and integer?, which is why R5RS doesn't define it.

The goals of RnRS are not the same as the goals of the CL spec.

Anton
From: ····@sonic.net
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F26BD30.4B9AEB5E@sonic.net>
Anton van Straaten wrote:
> 
> Joerg Hoehle wrote:
> > Another example where the Scheme reference is IMHO too succint,
> > whereas ANSI-CL is precisely defined: can you tell me the result of
> > (positive? 0)
> > (negative? 0)
> > just from the R5RS or R4RS?
> 
> Er, may I refer you to the universally-accepted definitions of the words
> 'positive' and 'negative' as they relate to numbers?
> 
> http://mathworld.wolfram.com/Positive.html : A quantity greater than zero
> http://mathworld.wolfram.com/Negative.html : A quantity less than zero
> 
> Specification documents exist in a context.  You're suggesting it's a
> disadvantage that RnRS doesn't define basic arithmetic concepts?
> 

Actually, wait.  What's the sign of inexact zero?  It's 
true that exact zero is neither positive nor negative, 
but the standard doesn't specify what the predicates 
return for inexact zero; following the mathematical 
recommendations in _Much ado about Nothing's Sign Bit_ 
would have both predicates returning #t if the quantity
were inexact. 

He posed the problem as (positive? 0) and (negative? 0), 
using the exact-numbers notation, so both predicates 
*ought* to return #f.  But if you do serious mathematics
with scheme, you quickly learn that proper support for 
the exact/inexact distinction is patchy at best.

				Bear
From: Matthias Blume
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <m1he551527.fsf@tti5.uchicago.edu>
····@sonic.net writes:

> Anton van Straaten wrote:
> > 
> > Joerg Hoehle wrote:
> > > Another example where the Scheme reference is IMHO too succint,
> > > whereas ANSI-CL is precisely defined: can you tell me the result of
> > > (positive? 0)
> > > (negative? 0)
> > > just from the R5RS or R4RS?
> > 
> > Er, may I refer you to the universally-accepted definitions of the words
> > 'positive' and 'negative' as they relate to numbers?
> > 
> > http://mathworld.wolfram.com/Positive.html : A quantity greater than zero
> > http://mathworld.wolfram.com/Negative.html : A quantity less than zero
> > 
> > Specification documents exist in a context.  You're suggesting it's a
> > disadvantage that RnRS doesn't define basic arithmetic concepts?
> > 
> 
> Actually, wait.  What's the sign of inexact zero?  It's 
> true that exact zero is neither positive nor negative, 
> but the standard doesn't specify what the predicates 
> return for inexact zero; following the mathematical 
> recommendations in _Much ado about Nothing's Sign Bit_ 
> would have both predicates returning #t if the quantity
> were inexact. 
> 
> He posed the problem as (positive? 0) and (negative? 0), 
> using the exact-numbers notation, so both predicates 
> *ought* to return #f.  But if you do serious mathematics
> with scheme, you quickly learn that proper support for 
> the exact/inexact distinction is patchy at best.
> 
> 				Bear

This is a fundamental problem with the idea of making some but not all
sets of values distinguish between exact and inexact versions.
In Scheme, only numbers can be both inexact and exact; all other data
is considered exact.  But, of course, the result of

   (zero? x)

should be an inexact boolean if x's value is an inexact number...

In other words, I personally find the distinction between inexact and
exact numbers in Scheme merely cute, but not terribly useful in practice.

Matthias
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <k7a1kqh3.fsf@ccs.neu.edu>
Matthias Blume <········@shimizu-blume.com> writes:

> In other words, I personally find the distinction between inexact and
> exact numbers in Scheme merely cute, but not terribly useful in practice.

Agreed.  If inexact numbers are represented by floating point numbers
(the standard practice), the computer is actually doing `inexact' (but
well-defined) computation on an exact representation.

Somebody check my temperature.  This is the second time in as many
weeks that I've agreed with Matthias.
From: ····@sonic.net
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3F26E5AA.E81D485B@sonic.net>
Matthias Blume wrote:
> 
> ····@sonic.net writes:

> > with scheme, you quickly learn that proper support for
> > the exact/inexact distinction is patchy at best.
> >
> >                               Bear
> 
> This is a fundamental problem with the idea of making some but not all
> sets of values distinguish between exact and inexact versions.
> In Scheme, only numbers can be both inexact and exact; all other data
> is considered exact.  But, of course, the result of
> 
>    (zero? x)
> 
> should be an inexact boolean if x's value is an inexact number...

I've had that same thought.  Perhaps someone will implement #i#t and #i#f 
and we will see whether they're useful.  

I don't think, though, that there is any reasonable rationale for inexact
characters and strings. 

				Bear
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <5nDVa.26808$Mc.2089049@newsread1.prod.itd.earthlink.net>
Bear wrote:
> Matthias Blume wrote:
> >
> > ····@sonic.net writes:
>
> > > with scheme, you quickly learn that proper support for
> > > the exact/inexact distinction is patchy at best.
> > >
> > >                               Bear
> >
> > This is a fundamental problem with the idea of making some but not all
> > sets of values distinguish between exact and inexact versions.
> > In Scheme, only numbers can be both inexact and exact; all other data
> > is considered exact.  But, of course, the result of
> >
> >    (zero? x)
> >
> > should be an inexact boolean if x's value is an inexact number...
>
> I've had that same thought.  Perhaps someone will implement #i#t and #i#f
> and we will see whether they're useful.

According to http://bayes.wustl.edu/ : "probability theory, as originated by
Laplace, is a generalization of Aristotelian logic that reduces to deductive
logic in the special case that our hypotheses are either true or false".
Sounds like the perfect application for inexact booleans...

Anton
From: Jeffrey Mark Siskind
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <a520654a.0307300512.587f6563@posting.google.com>
> > This is a fundamental problem with the idea of making some but not all
> > sets of values distinguish between exact and inexact versions.
> > In Scheme, only numbers can be both inexact and exact; all other data
> > is considered exact.  But, of course, the result of
> > 
> >    (zero? x)
> > 
> > should be an inexact boolean if x's value is an inexact number...
> 
> I've had that same thought.  Perhaps someone will implement #i#t and #i#f 
> and we will see whether they're useful.  
> 
> I don't think, though, that there is any reasonable rationale for inexact
> characters and strings. 

The rationale is simple.
(integer->char (exact->inexact k)) should return an inexact char and
(string (integer->char (exact->inexact k))) should return an inexact string
(or at least a string with an inexact char in it).  The whole reason for the
exact/inexact distinction is that certain properties are lost when a
calculation yields inexact results.  When the inexact tag is lost, by
something like (char->integer (integer->char (exact->inexact k))), you lose
track of when those properties are present or absent.
From: Russell Wallace
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <3f2824df.107747403@news.eircom.net>
On 30 Jul 2003 06:12:42 -0700, ····@purdue.edu (Jeffrey Mark Siskind)
wrote:

>The whole reason
>for the
>exact/inexact distinction is that certain properties are lost when a
>calculation yields inexact results.

No, the whole reason for the exact/inexact distinction is that the
compiler needs to provide both rationals and floating point (most
languages choose to call the latter "double", Scheme chooses to call
it "inexact"; that's a purely aesthetic distinction), and it obviously
needs to keep track of which is which; given that, you might as well
make that information available to the programmer as well.

There's no corresponding advantage to be gained from supporting an
inexact variant of other types, because always using the exact
versions doesn't entail having your calculations take a million years
to run.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <smopku56.fsf@ccs.neu.edu>
····@sonic.net writes:

> Actually, wait.  What's the sign of inexact zero?  

Which inexact zero?
From: Michael Livshin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <s37k6186kf.fsf@laredo.verisity.com.cmm>
Joe Marshall <···@ccs.neu.edu> writes:

> ····@sonic.net writes:
>
>> Actually, wait.  What's the sign of inexact zero?  
>
> Which inexact zero?

African, probably?

-- 
May all your PUSHes be POPped.
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <oezdkqtt.fsf@ccs.neu.edu>
Michael Livshin <······@cmm.kakpryg.net> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> ····@sonic.net writes:
>>
>>> Actually, wait.  What's the sign of inexact zero?  
>>
>> Which inexact zero?
>
> African, probably?

Are you suggesting that zeroes migrate?!
From: Matt Curtin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <86he54wy9l.fsf@rowlf.interhack.net>
Joe Marshall <···@ccs.neu.edu> writes:

> Michael Livshin <······@cmm.kakpryg.net> writes:
>
>> Joe Marshall <···@ccs.neu.edu> writes:
>>
>>> ····@sonic.net writes:
>>>
>>>> Actually, wait.  What's the sign of inexact zero?  
>>>
>>> Which inexact zero?
>>
>> African, probably?
>
> Are you suggesting that zeroes migrate?!

Not at all, they can be curried!

-- 
Matt Curtin, CISSP, IAM, INTP.  Keywords: Lisp, Unix, Internet, INFOSEC.
Founder, Interhack Corporation +1 614 545 HACK http://web.interhack.com/
Author of /Developing Trust: Online Privacy and Security/ (Apress, 2001)
From: Michael Livshin
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <s33cgo8qab.fsf@laredo.verisity.com.cmm>
Joe Marshall <···@ccs.neu.edu> writes:

> Michael Livshin <······@cmm.kakpryg.net> writes:
>
>> Joe Marshall <···@ccs.neu.edu> writes:
>>
>>> ····@sonic.net writes:
>>>
>>>> Actually, wait.  What's the sign of inexact zero?  
>>>
>>> Which inexact zero?
>>
>> African, probably?
>
> Are you suggesting that zeroes migrate?!

only those in the most significant position can afford any food during
the winter months.

-- 
Roses are red,
  Violets are blue,
I'm schizophrenic...
  And I am too.
From: Bruce Lewis
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <nm9el09131o.fsf@magic-pi-ball.mit.edu>
····@sonic.net writes:

> But if you do serious mathematics
> with scheme, you quickly learn that proper support for 
> the exact/inexact distinction is patchy at best.

Would better-defined negative? positive? and zero? predicates for
inexact numbers really help this "serious mathematics"?  My expectation
for the zero? predicate in particular is that it won't work predictably
with inexact numbers.  Are my expectations too low?
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <fzkpkq1g.fsf@ccs.neu.edu>
Bruce Lewis <·······@yahoo.com> writes:

> Would better-defined negative? positive? and zero? predicates for
> inexact numbers really help this "serious mathematics"?  My expectation
> for the zero? predicate in particular is that it won't work predictably
> with inexact numbers.  Are my expectations too low?

I'd say so.

I'd at least expect zero? to reliably return the same value each time
it was presented with the same number.  Furthermore, I'd expect zero?
to return true for the number created by reading "0.0".  I'd expect
that any number for which zero? returned true to print as "0.0"
(possibly "-0.0" as well).  Finally, I'd expect that for any number X,
(zero? (- x x)) is true.
From: Jeffrey M. Vinocur
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <bg77o3$f0q$1@puck.litech.org>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>
>Finally, I'd expect that for any number X, (zero? (- x x)) is true.

Oo, I don't think you mean that one!

What if x is an infinite quantity?  A number, but one that gives
NaN when subtracted from itself.


-- 
Jeffrey M. Vinocur   *   ·····@cornell.edu
http://www.people.cornell.edu/pages/jmv16/
From: Joe Marshall
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <7k5yltf1.fsf@ccs.neu.edu>
·····@cornell.edu (Jeffrey M. Vinocur) writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>>
>>Finally, I'd expect that for any number X, (zero? (- x x)) is true.
>
> Oo, I don't think you mean that one!
>
> What if x is an infinite quantity?  A number, but one that gives
> NaN when subtracted from itself.

You're right.  

``For any non-infinite number X...''?
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <K1EVa.26878$Mc.2091674@newsread1.prod.itd.earthlink.net>
Bear wrote:
> Actually, wait.  What's the sign of inexact zero?  It's
> true that exact zero is neither positive nor negative,
> but the standard doesn't specify what the predicates
> return for inexact zero; following the mathematical
> recommendations in _Much ado about Nothing's Sign Bit_
> would have both predicates returning #t if the quantity
> were inexact.

Based on standard definitions, I would expect both positive? and negative?
to return #f if zero? returns #t.  Also, if zero? returns #t, then both (> x
0) and (< x 0) should return #f.  I would expect a departure from this to be
documented in the report, since it would seem to violate standard
definitions.

Whether zero? returns #t for an inexact number is covered by the note about
comparing inexacts: "the results may be unreliable because a small
inaccuracy may affect the result".  Ditto for (> x 0) and (< x 0).  It would
certainly be possible to define zero? more precisely, but that shouldn't
affect the *definitions* of positive? or negative?, which are defined in
terms of a number's relationship to zero.

I haven't read _Much ado about Nothing's Sign Bit_ and IANAMathematician.
I'm curious to know the justification for having positive? and negative?
both return #t for the same inexact value.  That would imply that a number
can be both greater than *and* less than zero.  That seems logically
problematic.  It would make more sense to have the predicates throw
exceptions if they can't return a logically consistent result.

> He posed the problem as (positive? 0) and (negative? 0),
> using the exact-numbers notation, so both predicates
> *ought* to return #f.  But if you do serious mathematics
> with scheme, you quickly learn that proper support for
> the exact/inexact distinction is patchy at best.

That's true, but I don't see that it should affect the definitions of
positive? and negative?

Anton
From: Erann Gat
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <gat-2907031935400001@192.168.1.51>
In article <······················@newsread1.prod.itd.earthlink.net>,
"Anton van Straaten" <·····@appsolutions.com> wrote:

> I haven't read _Much ado about Nothing's Sign Bit_ and IANAMathematician.
> I'm curious to know the justification for having positive? and negative?
> both return #t for the same inexact value.  That would imply that a number
> can be both greater than *and* less than zero.  That seems logically
> problematic.

Not if you interpret the semantics of positive? to be "possibly positive"
instead of "definitely positive."

E.
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <muIVa.125363$Io.10668782@newsread2.prod.itd.earthlink.net>
Erann Gat wrote:
> "Anton van Straaten" <·····@appsolutions.com> wrote:
>
> > I haven't read _Much ado about Nothing's Sign Bit_ and
> > IANAMathematician.
> > I'm curious to know the justification for having positive? and negative?
> > both return #t for the same inexact value.  That would imply that a
> > number
> > can be both greater than *and* less than zero.  That seems logically
> > problematic.
>
> Not if you interpret the semantics of positive? to be "possibly positive"
> instead of "definitely positive."

Ah!  Thanks, I knew there was something I must be missing.

If R5RS intended to describe that sort of functionality, though, I would
have expected an explicit mention of it.

I suppose, if an implementation of Scheme wanted to give positive? and
negative? those meanings, it could do so without technically violating the
report, and perhaps that's intended (?)  But I think there are some strong
arguments for using separate functions, e.g. possibly-positive? and
possibly-negative?  For a start, this would cater to naive
non-mathematicians with determinedly simplistic world views, such as myself;
and it would also avoid confusion with the exact versions of those
functions, which would have different semantics.

Anton
From: Erann Gat
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <gat-3007030751130001@192.168.1.51>
In article <························@newsread2.prod.itd.earthlink.net>,
"Anton van Straaten" <·····@appsolutions.com> wrote:

> Erann Gat wrote:
> > "Anton van Straaten" <·····@appsolutions.com> wrote:
> >
> > > I haven't read _Much ado about Nothing's Sign Bit_ and
> > > IANAMathematician.
> > > I'm curious to know the justification for having positive? and negative?
> > > both return #t for the same inexact value.  That would imply that a
> > > number
> > > can be both greater than *and* less than zero.  That seems logically
> > > problematic.
> >
> > Not if you interpret the semantics of positive? to be "possibly positive"
> > instead of "definitely positive."
> 
> Ah!  Thanks, I knew there was something I must be missing.
> 
> If R5RS intended to describe that sort of functionality, though, I would
> have expected an explicit mention of it.
> 
> I suppose, if an implementation of Scheme wanted to give positive? and
> negative? those meanings, it could do so without technically violating the
> report, and perhaps that's intended (?)

All R5RS has to say about "positive?" and "negative?" is:

"These numerical predicates test a number for a particular property,
returning #t or #f .  See note above. "

So they could do pretty much anything without technically violating the
standard.

("positive?" returns #T if its argument is positive, but what this means
depends on what the meaning of the word "is" is :-)

>  But I think there are some strong
> arguments for using separate functions, e.g. possibly-positive? and
> possibly-negative?  For a start, this would cater to naive
> non-mathematicians with determinedly simplistic world views, such as myself;
> and it would also avoid confusion with the exact versions of those
> functions, which would have different semantics.

You'll have to take that up with the Scheme committee.

E.
From: Anton van Straaten
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <hp3Wa.126696$Io.10769694@newsread2.prod.itd.earthlink.net>
> > > Not if you interpret the semantics of positive? to be "possibly
> > > positive" instead of "definitely positive."
> >
> > Ah!  Thanks, I knew there was something I must be missing.
> >
> > If R5RS intended to describe that sort of functionality, though, I would
> > have expected an explicit mention of it.
> >
> > I suppose, if an implementation of Scheme wanted to give positive? and
> > negative? those meanings, it could do so without technically violating
> > the report, and perhaps that's intended (?)
>
> All R5RS has to say about "positive?" and "negative?" is:
>
> "These numerical predicates test a number for a particular property,
> returning #t or #f .  See note above. "
>
> So they could do pretty much anything without technically violating the
> standard.

That's where I go back to a point I made earlier, which is that a standard
exists in a context, and it wouldn't make much sense for RnRS to make
positive? and negative? simply reserved words for arbitrary functions from
number -> boolean.  But perhaps you agree:

> ("positive?" returns #T if its argument is positive, but what this means
> depends on what the meaning of the word "is" is :-)

Nice way to put it!

> >  But I think there are some strong
> > arguments for using separate functions, e.g. possibly-positive? and
> > possibly-negative?  For a start, this would cater to naive
> > non-mathematicians with determinedly simplistic world views, such as
> > myself;
> > and it would also avoid confusion with the exact versions of those
> > functions, which would have different semantics.
>
> You'll have to take that up with the Scheme committee.

Seems to me that these issues could be addressed with a SRFI...

Anton
From: Paolo Amoroso
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <um8VP5zg878qTOmuKL5E+BDRjGIv@4ax.com>
[followup posted to comp.lang.lisp only]

On 15 Jul 2003 19:44:39 -0700, ·········@gmx.net (thelifter) wrote:

> My personal Summary:
[...]
> 1) Lisp is too large, it cannot be implemented very efficiently or
> better: it can't be
> implemented easily.

You may try a smaller language, such as ISLISP:

  http://islisp.info


> How many CLs that work with Java are there? AFAIK not one, and this is
> not likely to change, because CL is so HUGE. It would take a LONG time

Depending on what you mean by "work with Java", Franz provides good Java
integration in Allegro CL.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Christopher Browne
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <bf4hga$ai6ou$1@ID-125932.news.uni-berlin.de>
·········@gmx.net (thelifter) writes:
> "2. Compiler Simplicity. It should be possible to write a compiler
> which produces very efficient code, and it should not be too great
> an undertaking to write such a compiler perhaps a man-year at the
> outside."

> I think Lisp fails here. How long does it take to write a good CL
> compiler/interpreter/system? How long does it take for Scheme?

In what useful language is it trivial to implement a good compiler?

C?  Certainly not.  C++?  Preposterous.  Ada?  Nope.

Pointing at Scheme, the _good_ compiler is one that Siskind has been
working on for _years_.  The Guile people have been working on that
for years, and they don't even have much of a "compiler."
-- 
wm(X,Y):-write(X),write(·@'),write(Y). wm('cbbrowne','acm.org').
http://www3.sympatico.ca/cbbrowne/finances.html
"very few people approach me in real life and insist on proving they
are drooling idiots."  -- Erik Naggum, comp.lang.lisp
From: Jeffrey Mark Siskind
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <a520654a.0307161925.38afc531@posting.google.com>
> Pointing at Scheme, the _good_ compiler is one that Siskind has been
> working on for _years_.

The development time of Stalin is measured in five-year plans, not years.
I am about to finish my second five-year plan and start on the third.
The first archive of Stalin is dated r11nov93.
:-) or :-(, I'm not sure which is more appropriate.
From: Paolo Amoroso
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <qrwWP+omRQa90KRYBHpGHvW2coXK@4ax.com>
On 16 Jul 2003 20:25:39 -0700, ····@purdue.edu (Jeffrey Mark Siskind)
wrote:

> The development time of Stalin is measured in five-year plans, not years.

We already have Lispniks. We will also have Sputniks.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Gareth McCaughan
Subject: Re: A Critique of Common Lisp
Date: 
Message-ID: <878yqy5ahj.fsf@g.mccaughan.ntlworld.com>
[I'm sending this only to comp.lang.lisp. The world has had
enough crossposted CL/Scheme flamewars already. Plus a copy
to "thelifter" in case he's only reading c.l.scheme these
days. -- g]

"thelifter" wrote:

> I want to discuss the points raised at the beginning of the paper(you
> can skip to my personal summary at the end if you are in a hurry):
> 
> "1. Intellectual Conciseness. A language should be small enough for
> a programmer to be able to keep all of it in mind when programming.
> This can be achieved through regularity in the design and a
> disciplined approach to the introduction of primitives and new
> programming concepts. PASCAL is a good example of such a language,
> although it is perhaps too concise."

Small languages make big programs. (I think I stole that
from Paul Graham or someone.)

> "2. Compiler Simplicity. It should be possible to write a compiler
> which produces very efficient code, and it should not be too great
> an undertaking to write such a compiler perhaps a man-year at the
> outside."
> 
> I think Lisp fails here. How long does it take to write a good CL
> compiler/interpreter/system? How long does it take for Scheme?

Depends on what you mean by "good". But writing a good Scheme
compiler is *not* easy. You need to do a lot of control-flow
inference, especially in the presence of call-with-current-continuation,
plus all the same type inference as you need for CL.

And, of course, there's all the pain that you have when trying
to write a good compiler for *any* language: actually generating
efficient code. Register allocation, instruction scheduling, blah.
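
To put some flesh on "type inference": a compiler can only open-code
arithmetic when it knows (or is told) the operand types. Here's a small
CL sketch of the sort of declared code a good native-code compiler can
turn into raw machine arithmetic (the exact payoff varies by
implementation):

  (defun sum-below (n)
    ;; With these declarations the compiler can use untagged fixnum
    ;; arithmetic instead of generic dispatch. (THE FIXNUM ...) is a
    ;; promise that the running sum stays in fixnum range.
    (declare (type fixnum n)
             (optimize (speed 3) (safety 1)))
    (let ((acc 0))
      (declare (type fixnum acc))
      (dotimes (i n acc)
        (setf acc (the fixnum (+ acc i))))))

Take away the declarations and the compiler must fall back on generic,
bignum-capable arithmetic; inferring when it needn't is precisely the
hard part.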

> My personal Summary:
> 
> 1) Lisp is too large, it cannot be implemented very efficiently or
> better: it can't be implemented easily.

No language can be implemented both easily and really well.

There are implementations of Common Lisp that achieve performance
comparable to that of decent C compilers. ("Comparable" means:
when it's as much as 3-4 times worse, people feel really quite
aggrieved about it.) If that's not good enough, then I suppose
such languages as Perl and Python are completely sunk. Strange,
since I don't see any shortage of people using them, or of
programs written in them.

> For example, why does a Hashtable need to be part of the language
> definition? No other language I know has it; everywhere Hashtables are
> provided as additional packages.

Then you do not know enough languages. Perl and Python,
for instance, both have them. Indeed, in those languages
they're "more built in" than they are in Lisp: there's
special notation for creating them, and various bits of
language internals use them.
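
And the CL version is hardly exotic: hash tables are ordinary standard
functions, no add-on package required. All of this is plain ANSI CL:

  (let ((h (make-hash-table :test #'equal)))  ; EQUAL so string keys work
    (setf (gethash "lisp" h) 1958)            ; insertion is SETF of GETHASH
    (setf (gethash "scheme" h) 1975)
    (gethash "lisp" h))                       ; => 1958, T (key was present)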

> If you put most stuff into packages, if the user doesn't need a
> hashtable, he can work with a smaller language. Just add what you
> need.

How does the user benefit from this, exactly? And how does
the implementor benefit from it? (Hint: if you have to
implement hash tables anyway, then *having* to do so in
a cleanly separable way can't make it easier.)

> In spite of being so large, Lisp doesn't have a standard way of doing
> GUIs, so usually most commercial Lisps have their own way of doing so,
> and this generates incompatibilities. Wouldn't it be easier to have
> this in a package that would be implemented to run on any Lisp?

People have to work hard to make money so that they can eat
and go to the movies and buy fast cars. Wouldn't it be easier
if money just grew on trees so that everyone could take as
much as they wanted without having to work for it? :-)

> The practical side shows that Scheme seems to get it right. AFAIK
> there are many more Scheme implementations than CL implementations,
> and it is very easy to make one if it should be needed. Once you have
> a Scheme you can use all the packages that conform to the standard. So
> in principle you should have the power of CL but with much more
> simplicity in the core language.

In principle. Uh-huh. Does it work that way *in practice*?

Another way to look at the statement that "it's very easy
to make [a new Scheme implementation]" is this: having a
Scheme implementation doesn't really buy you all that much.
Remind me why that's good, again? :-)

> I think this also shows that the BIG language core is indeed a
> disadvantage not only for the implementor but also for the user. BIG
> core means less implementations, which means less programs, packages
> written for it, which means less users/uses, which means less
> implementations, etc...

Why does "less implementations" mean "less programs, packages
written for it"? How many implementations of Perl are there?

> Viewing from the point of view of the implementor a HUGE language core
> has its advantages of course: once you have written an implementation
> any startup will have to work very hard to catch up. Meanwhile you can
> add proprietary extensions so that your customers are unlikely to
> change to another implementation.

I suppose that's one way of looking at it, but I'm deeply
unconvinced (a) that the size of CL works to the benefit of
implementors in that particular way, and (b) that similar
benefits aren't available with smaller languages.

(I take it you're actually being ironic there, which is why
I mention (b).)

-- 
Gareth McCaughan
.sig under construc