From: Mark Tarver
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129717511.292559.25550@g43g2000cwa.googlegroups.com>
One of the biggest failures of Lisp in the UK is to
make it as the language of choice for teaching
UGs functional programming.

There are several reasons why.  One reason is that
people can more easily write horrible looking procedural programs
in Lisp than in any other FPL.  The language abounds
in procedural constructions, GO, FOR, LOOP and
expressions for stuffing values into places.  All useful
in certain contexts no doubt, but open to abuse.  I
see people writing FOR loops instead of using tail recursion.
Students who lack discipline can produce some horrendous
stuff in CL.  Self-taught programmers can fail to learn
to really understand things like data abstraction in CL.

ML and related languages do enforce more discipline on
the novice.  Later they can irritate when you want to let
go of mummy's hand and CL becomes attractive.  At
the beginning though, its good discipline.

The other thing that kills Lisp as a UG language is the
syntax. Simple in BNF terms sure, but lacking pattern-directed
programming and unlike Prolog, the symbols need quoting -
people regard it as old fashioned.  Type checking comes
up as an issue too.

There's lots of positive things I could say, but the header says
"CL failure stories" and this is the biggest.  It has knock-on
effect on the whole Lisp industry.  No UG Lisp hackers means
no domestic Lisp industry (no people for the jobs, so employers
dont put out the jobs).  The battle for Lisp is really lost in
education.

Mark

From: Paul F. Dietz
Subject: Re: CL failure stories?
Date: 
Message-ID: <ZKmdndEG0oQ4rsveRVn-pw@dls.net>
Mark Tarver wrote:

>  I see people writing FOR loops instead of using tail recursion.

And this is bad... why?

	Paul
From: Mark Tarver
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129728752.035489.284240@g49g2000cwa.googlegroups.com>
FOR is not necessarily a bad thing.  It has its
uses.  Only that if you're trying to teach a
bunch of students to think declaratively then
FOR helps not a bit.  People who are self-taught
Lispers seem to gravitate to the bits of Lisp
that remind them of what they know.  Here
Lisp obligingly allows them to entrench their
previous habits of thought from C++ or wherever.
You can see the results in a lot of obfuscated online
Lisp programs.

Now, with no offence, the CNF program of Mr
Vishnuvyas is an example of this.  He's a very clever chap
and I'm impressed with what he has done - but
it's not clear; it contains pushes, setfs and what
not.  Lisp allows him to do this.  If he were writing
in ML, his program would look clearer.  He'd probably
declare CNF as a type and so you would be able
to see at a glance what the grammar of his CNF
is.  As it is you have to try to work it out.  He
needs a bit of guidance, but CL doesn't give it.
CL expects that you come to it with your discipline
in place - and not all people have it.
From: vishnuvyas
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129748764.264046.304160@g43g2000cwa.googlegroups.com>
>Mark Tarver wrote:
>As it is you have to try to work it out.  He
>needs a bit of guidance, but CL doesn't give it.
>CL expects that you come to it with your discipline
>in place - and not all people have it.

Thats the reason I posted it on c.l.l, to find "a bit of guidance"...
From: K. Ari Krupnikov
Subject: Re: CL failure stories?
Date: 
Message-ID: <86u0fd8p9g.fsf@deb.lib.aero>
"Mark Tarver" <··········@ukonline.co.uk> writes:

> One of the biggest failures of Lisp in the UK is to
> make it as the language of choice for teaching
> UGs functional programming.
> 
> There are several reasons why.

When I was in grad school in ed.ac.uk, the official reason given was
NIH: Since Prolog was invented in Edinburgh, sort of, that's what they
teach. In America, they conceded, Lisp is more popular, but who would
want be to be like those Americans?

Ari.

-- 
Elections only count as free and trials as fair if you can lose money
betting on the outcome.
From: Mark Tarver
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129747102.719882.35750@g49g2000cwa.googlegroups.com>
> One of the biggest failures of Lisp in the UK is to
> make it as the language of choice for teaching
> UGs functional programming.

> I can't really see this as a failing of Lisp, it's like complaining
> that a Swiss army knife is defective because it does more than just
> whittle.

This analogy escapes me entirely.

> Lisp is *not* a functional language.

Wow!  I'm rubbing my eyes here.

Time for a reality check.

Lisp was the first pure functional language ....

Field & Harrison p. 86.

Guess me, Field and Harrison had better all check in the same clinic.
Mark
From: Geoffrey Summerhayes
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129749953.391228.244270@g47g2000cwa.googlegroups.com>
Mark Tarver wrote:
>
> > Lisp is *not* a functional language.
>
> Wow!  I'm rubbing my eyes here.
>
> Time for a reality check.
>
> Lisp was the first pure functional language ....
>
> Field & Harrison p. 86.

Then you have nothing to complain about, however ugly the programs
are, since they were written in a 'pure functional language', they
must be functional. I'm sure SETF, NREVERSE, LOOP, and GO will turn
out to be merely figments of my personal, distorted imagination. :-P

> Guess me, Field and Harrison had better all check in the same clinic.

Agreed, this myth is getting very old. Oh wait, I forgot they couldn't
be wrong, they're academics! Phew, you almost caught me on that.
(Also explains the dirty looks I got for correcting profs during class,
 I'm sure they were just pretending I was right)

---
Geoff
From: Duane Rettig
Subject: Re: CL failure stories?
Date: 
Message-ID: <4psq1nrfc.fsf@franz.com>
"Mark Tarver" <··········@ukonline.co.uk> writes:

>> One of the biggest failures of Lisp in the UK is to
>> make it as the language of choice for teaching
>> UGs functional programming.
>
>> I can't really see this as a failing of Lisp, it's like complaining
>> that a Swiss army knife is defective because it does more than just
>> whittle.
>
> This analogy escapes me entirely.
>
>> Lisp is *not* a functional language.
>
> Wow!  I'm rubbing my eyes here.

Time to wake up and smell the coffee.

> Time for a reality check.

Yep.

> Lisp was the first pure functional language ....

Not even Scheme, which claims to be a Lisp, claims
to be _purely_ functional.

> Field & Harrison p. 86.
>
> Guess me, Field and Harrison had better all check in the same clinic.
> Mark

Yep.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: ··············@hotmail.com
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129761440.084898.106570@z14g2000cwz.googlegroups.com>
> > Lisp is *not* a functional language.
>

Mark Tarver wrote:

> Wow!  I'm rubbing my eyes here.
>
> Time for a reality check.
>
> Lisp was the first pure functional language ....

Perhaps, if you were present at the invention of the term "functional
language" and before "pure" became a term applied to functional
languages, and have been living in a cave since then. In my rough
estimation that is somewhere before 1970. At which time Scheme didn't
exist, and Lisp was probably the *most* pure functional language
available.

> Field & Harrison p. 86.

1988 is the copyright date I found on a quick search. Perhaps things
have changed in the functional programming world in the last seventeen
years?
I wouldn't know, because I last bought a textbook on functional
programming in 1987.

Instead of a quick response of "Wow!" you ought to carefully analyze
what your definition of "pure functional language" is, and how it
compares to OTHER people's definition of "pure functional language"
before you put Common Lisp in that category.

Nowadays, Lisp looks pretty darn impure, unless you are comparing it to
assembly language or C/Pascal. The functional programming community
left *years* ago, and they went *thataway* (pointing vaguely toward
OCaml and Haskell).
From: Pisin Bootvong
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129796055.395753.26390@g43g2000cwa.googlegroups.com>
The first Lisp, as in mathematical representation of algorithm, might
be pure functional.

Pure functional language has no side effect, unless through Monad;
Haskell is pure functional language.

When Lisp is said as of today, it usually means Common Lisp.
Common Lisp has variable and you can do side effect, SETF and special
variable being the basic one.

So in short, Common Lisp is not pure functional language.
From: Mark Tarver
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129810121.990029.136910@g47g2000cwa.googlegroups.com>
I guess I need to complete the whole quote from Field & Harrison.
I hoped to avoid all this typing.  Here it is.

"Lisp was the first pure functional language and was designed
by John McCarthy in the early 1960s. Although the original Lisp
language was pure in that it was referentially transparent, the
dialects which evolved in subsequent years included many
imperative features, most notably constructs for performing
destructive assignment which destroyed the inherent simplicity
and elegance of the original language.  Embedded within all these
dialects, however, is a 'pure' subset which if considered in
isolation can be used to write functional programs as we have
come to know them."

I have no inclination to ferret around in Lisp 1.5 to find out
the state of play of Lisp in say 1963.  Its not important.
Neither Field, Harrison nor myself are daft enough to consider
that CL as it currently stands is a pure functional language.
Nor are we daft enough to say, as Mr Summerhayes says,
that it is not a functional language at all.  We regard it as a
functional language with a heavy accretion of procedural features.
On this matter, I don't think more space need be wasted,
since these facts are well known to everybody in the Lisp
community.  

Mark
From: Tayssir John Gabbour
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129813314.965945.169590@z14g2000cwz.googlegroups.com>
nMark Tarver wrote:
> I guess I need to complete the whole quote from Field & Harrison.
> I hoped to avoid all this typing.  Here it is.
>
> "Lisp was the first pure functional language and was designed
> by John McCarthy in the early 1960s. Although the original Lisp
> language was pure in that it was referentially transparent, the
> dialects which evolved in subsequent years included many
> imperative features, most notably constructs for performing
> destructive assignment which destroyed the inherent simplicity
> and elegance of the original language.  Embedded within all these
> dialects, however, is a 'pure' subset which if considered in
> isolation can be used to write functional programs as we have
> come to know them."
>
> I have no inclination to ferret around in Lisp 1.5 to find out
> the state of play of Lisp in say 1963.  Its not important.
> Neither Field, Harrison nor myself are daft enough to consider
> that CL as it currently stands is a pure functional language.
> Nor are we daft enough to say, as Mr Summerhayes says,
> that it is not a functional language at all.  We regard it as a
> functional language with a heavy accretion of procedural features.
> On this matter, I don't think more space need be wasted,
> since these facts are well known to everybody in the Lisp
> community.
>
> Mark

Yes, I think you were being clear. (Though everyone probably knows how
easy it is to actively try to misunderstand someone. ;)

What are the barriers to Qi's academia adoption? "Simply" books and
guis, or is there something more political/social? Time?

And the biggest criticism /I/ hear about Lisp is completely the
opposite -- the "CADDR style" of programming, the very pretentious,
aesthetic, ivory tower worldview they think Lisp naturally has. This
does conflict with the understandable needs of academia which you
mention. Perhaps both sides have stronger ideology than they're aware?


Tayssir
From: Mark Tarver
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129831370.699779.28820@o13g2000cwo.googlegroups.com>
> And the biggest criticism /I/ hear about Lisp is completely the
> opposite -- the "CADDR style" of programming, the very pretentious,
> aesthetic, ivory tower worldview they think Lisp naturally has. This
> does conflict with the understandable needs of academia which you
> mention. Perhaps both sides have stronger ideology than they're aware?

This other view would represent the student body.
Their view is pragmatic: they want to learn languages
that will land them jobs.  Or languages which they
can do cool apps.

Lisp is not cool from this point of view.  The
trouble is that Lisp out of the box just
does not support so many of the things
students want to do - web based apps, GUIs.
Visual Basic is more immediately appealing.
Never mind that under the chrome is a
puttering 500 cc engine and Lisp is a V8.
Chrome sells.

The response of the Lisp community is
"But this has been done, look at Joe
Blow's implementation of X in Lisp"
where X is whatever feature you want.
The trouble is that its not the same
as a standard in-the-box component.
Too many Xs are hacks of varying quality,
generally not documented and not part
part of the CL language, written by people who
have moved on to other things and
possibly not maintained or bug-consistent.
You're not likely to get a job programming
in Joe Blow's X.

So Lisp gets hit from both ends - from profs
who dislike it for the reasons I've stated
i.e. it allows poor programming style and
misses modern FP features. And from students
who don't like it because it misses the
features they want to write killer apps.

I wrote Qi from the prof end - to put into
Lisp what was sorely needed - modern
pattern matching and types and a semantics
to support it.   It makes Lisp fully the equal
to modern FPs and in many ways more powerful.

But would the students be impressed?  No.
What is needed is integration into the environment
of the machine (that does not mean Linux alone)
so they can do those cool apps.  That is what the
IFPE project is about on www.lambdassociates.org.
To date I've made some progress integrating
Qi with TCL/Tk to produce Qi/Tk.

If Qi/Tk is finished, it will be an answer to both
profs and students.  Therefore it must catch
on - right?  Well, maybe not.  The reasons here
are sociological.

Functional programming in the UK
remains concentrated in a few universities;
Edinburgh and Glasgow being two.   They
have a heavy investment in ML or Haskell
and its not politically wise for people who
have made a large financial commitment
to something to adopt or even flirt with
any other approach.

Maybe we will see bolder spirits, grad students,
Ph.D. students, using
Qi to break new ground.  This will happen before
any adoption of Qi at UG level.  But I don't
expect any fast ground shift over in the UK -
its pretty conservative here.

More generally, there are 3 conditions which
make a language successful.

1.  The language embodies some new paradigm.

2.  It has the support of a powerful institution.

3.  It meets an identifiable need for which there
     is already a potential user base.

In order of importance its 2., 3. and 1. last.

Qi has only condition 1.  Regarding 2. Qi actually
allows you to venture outside the box in many
ways - hence my study series.  But - for many
programmers who have worked for years in their
box, that is as far as their imagination goes.
They're not interested in being outside the box
because their attitude is "Hey, where's the box?".
Their language defines what they can imagine
and hence what they desire.

The thing that would give Qi a big shot in the arm
would be 2.   If MIT, Microsoft or another big institution
picked Qi/Tk up and ran with it, it would change the
landscape - particularly Microsoft would really
steal the ball if they decided to use something like
this inside Windows.

Mark
From: ·············@gmail.com
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129839729.235885.30550@g49g2000cwa.googlegroups.com>
> The response of the Lisp community is
> "But this has been done, look at Joe
> Blow's implementation of X in Lisp"
> where X is whatever feature you want.
> The trouble is that its not the same
> as a standard in-the-box component.
> Too many Xs are hacks of varying quality,
> generally not documented and not part
> part of the CL language, written by people who
> have moved on to other things and
> possibly not maintained or bug-consistent.

Speaking as a newcomer to lisp, I dont think that characterization is
accurate.  How does this situation compare to CPAN or ruby gems?
Hacks, varying documentation quality, possibly no longer maintained,
not part of the language. . .  all of that doesnt seem to cause too
much trouble as long as there is a canonical source with a critical
mass of libraries.

Would there be any 'trouble' if the community's response could usually
be:
"But this has already been done, just (asdf-install:install
'joe-blows-X)"
instead of
"But this has already been done, track down joe blow's X off of cliki
or his webpage and then see if it works in your implementation."

> You're not likely to get a job programming
> in Joe Blow's X.

How about a job programming in David Heinemeier Hansson's X?  Or Jesse
James Garrett's Buzzword? Hey, look, cl-ajax is even asdf-installable,
i wonder how well it works . . .
From: Raffael Cavallaro
Subject: Re: CL failure stories?
Date: 
Message-ID: <2005102017252875249%raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2005-10-20 14:02:50 -0400, "Mark Tarver" <··········@ukonline.co.uk> said:

> More generally, there are 3 conditions which
> make a language successful.
> 
> 1.  The language embodies some new paradigm.
> 
> 2.  It has the support of a powerful institution.
> 
> 3.  It meets an identifiable need for which there
>      is already a potential user base.
> 
> In order of importance its 2., 3. and 1. last.
> 
> Qi has only condition 1.  Regarding 2. Qi actually
> allows you to venture outside the box in many
> ways - hence my study series.  But - for many
> programmers who have worked for years in their
> box, that is as far as their imagination goes.
> They're not interested in being outside the box
> because their attitude is "Hey, where's the box?".
> Their language defines what they can imagine
> and hence what they desire.
> 
> The thing that would give Qi a big shot in the arm
> would be 2.   If MIT, Microsoft or another big institution
> picked Qi/Tk up and ran with it, it would change the
> landscape - particularly Microsoft would really
> steal the ball if they decided to use something like
> this inside Windows.

This is exceedingly unlikely to happen as long as Qi is licensed under 
the GPL. You yourself have stated that an important factor in 
attracting attention is the perception that one could use the new 
technology to get a job. The GPL is an extremely limiting license for a 
technology that hopes to be used in the commercial sphere - it 
effectively prevents distribution of all but open source projects.

This alone means that GPLed projects won't even get a second look from 
large commercial institutions - they don't want to be backed into a 
corner should the need arise to distribute something that has 
previously only been used internally. They absolutely will not adopt 
something that requires them to release the source code of their unique 
commercial offerings (note that I specifically exclude 
yet-another-implementation-of-x infrastructure here). Certainly 
Microsoft is never going to think seriously about using a GPLed 
technology. The LLGPL or an MIT or BSD style license would be a much 
better choice if you are serious about this direction for Qi.

regards,

Ralph
From: Mark Tarver
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129849505.180311.191680@f14g2000cwb.googlegroups.com>
Well thats certainly a thought to bear in mind.

thanks

Mark
From: Geoffrey Summerhayes
Subject: Re: CL failure stories?
Date: 
Message-ID: <OeR5f.11750$ns3.1003628@news20.bellglobal.com>
"Mark Tarver" <··········@ukonline.co.uk> wrote in message 
·····························@g47g2000cwa.googlegroups.com...
>
> Neither Field, Harrison nor myself are daft enough to consider
> that CL as it currently stands is a pure functional language.
> Nor are we daft enough to say, as Mr Summerhayes says,
> that it is not a functional language at all.  We regard it as a
> functional language with a heavy accretion of procedural features.
> On this matter, I don't think more space need be wasted,
> since these facts are well known to everybody in the Lisp
> community.
>

I feel like being daft for just one more post. :-)

Admittedly, a functional language has to make *some*
concessions to be useful, but CL goes a lot farther.
CL has more than just a 'heavy accretion', it allows for
writing a program procedurally or object-oriented with
about as much functional bits as a standard C program.

If you are happy classifying the whole as the part, fine.

Personally, I think it's like saying {1 2 3 4 5 6 7 9}
is the set of all positive odd integers below 10, with a
heavy accretion of evens. Hey, it's mostly odds and the
original intention may have been to hold just the odd
numbers. (I know, you don't get the analogy here either.)

The power of CL comes from the fact that my hands
are not tied to one way of doing things. I can write
a system that contains: procedural, functional,
object-oriented, and with a couple of additional CL defs,
logical sections, interconnected, all without switching
languages. That's power, pure and simple.

So CL is a functional language.
Also, CL is a procedural language.
And, CL is an object-oriented language.
And, CL is a language construction set.
And, CL will likely be the 'style-of-the-month' language,
whatever it turns out to be.

Happy now?

--
Geoff 
From:  (typep 'nil '(satisfies identity)) => ?
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129966218.184849.13660@g14g2000cwa.googlegroups.com>
I on my own hope, that lisp will never become a popular language. the
lispers would then become average programmers.
... and I'm sure that our arrogant behaviour will win!
;-))
From: Alan Crowe
Subject: Re: CL failure stories?
Date: 
Message-ID: <868xwp3eyq.fsf@cawtech.freeserve.co.uk>
"Mark Tarver" <··········@ukonline.co.uk> writes:
> The battle for Lisp is really lost in
> education.

See me fighting for Lisp in education on slashdot

http://slashdot.org/comments.pl?sid=158755&cid=13304138

The lack of vision in higher education worries me. What
should a biology undergraduate learn? The key idea in
biology is evolution, so maybe the center piece of the
course should be learning a programming language and writing
a genetic algorithm thingy of their own and playing with it
for a while.

That way, biology undergraduates leave university with their own
personnal understanding of the key mechanism in their field.

Unfortunately discussions of programming languages seem to
be trapped within a much narrow vision in which computer
science undergraduates are prepared for programming jobs.

Alan Crowe
Edinburgh
Scotland
From: Vesa Karvonen
Subject: Re: CL failure stories?
Date: 
Message-ID: <dj5p79$6tp$1@oravannahka.helsinki.fi>
torve <···········@gmail.com> wrote:
[...]
> Anyway, most answers here do not really answer my question. I was
> actually looking for good technical reasons why one might want to
> choose a different language besides Lisp.
[...]

Maybe you are asking in the wrong group. You might get more answers by
asking from people who have switched from CL to some other language.

-Vesa Karvonen
From: Ulrich Hobelmann
Subject: Re: CL failure stories?
Date: 
Message-ID: <3rn2ebFkfsemU1@individual.net>
Alan Crowe wrote:
> "Mark Tarver" <··········@ukonline.co.uk> writes:
>> The battle for Lisp is really lost in
>> education.
> 
> See me fighting for Lisp in education on slashdot
> 
> http://slashdot.org/comments.pl?sid=158755&cid=13304138
> 
> The lack of vision in higher education worries me. What
> should a biology undergraduate learn? The key idea in
> biology is evolution, so maybe the center piece of the
> course should be learning a programming language and writing
> a genetic algorithm thingy of their own and playing with it
> for a while.

But *that* is a myth.  Everybody knows that humans aren't apes, God 
created all the stuff, burning down cities is good, yada yada...

> That way, biology undergraduates leave university with their own
> personnal understanding of the key mechanism in their field.
> 
> Unfortunately discussions of programming languages seem to
> be trapped within a much narrow vision in which computer
> science undergraduates are prepared for programming jobs.

Computer science is as much dominated by religious beliefs (that 
includes politics) as the rest of the world.  Impossible to have a 
conversation with those people until they see it themselves.

"If you don't know what Jazz is..."

-- 
Blessed are the young for they shall inherit the national debt.
	Herbert Hoover
From: [Invalid-From-Line]
Subject: Re: CL failure stories?
Date: 
Message-ID: <dj5q6j$lbr$1@news.sap-ag.de>
Hello,
"torve" <···········@gmail.com> wrote in message
> Do you have encountered serious problems that made
> you abandon a project or wish to have chosen a different language
> (anything, Perl, Haskell, C++)?
I think it is difficult to attribute failure to language choice, so
I will answer the "choose a different language" part.

In the past I have done most of my experimental programming in common lisp.
Unfortunately most of my experimental programming (combinatoric/bayesian
search) uses complex algorithms and requires getting the most speed out of
the hardware is possible (I spend time optimising) or changing the program
to use new data structures and algorithms.

I have found common lisp to be better than most other programming languages
I have used for this sort of thing.

I do have a number of problems though when using it.

1) Keeping track of mutable vs unmutables parts of data structures. Do I
need to mutate here, or do I need to construct a new changed copy.
2) Composing functions works, and composing macros works (i.e. passing a
function to a function, passing a macro to a macro), but passing a macro to
a function doesn't work. This means that if I change a function to macro to
get different semantics, I need to change all associated functions to macros
as well. Yuck.
3) I find I need to have a fairly detailed overview of a lisp program in
order to reorganize it, change its data structures and algorithms. Keeping
such a good overview becomes difficult.

Over the last 6 months I have been giving Haskell a go.

As it:
1) Keeps track of state for you.
2) Lazy evaluations mean that you don't need to use macros to get changes in
semantics.
3) Using the type system you can set up constraints that your program must
obey, and Haskell programs are automatically very modular. You don't need a
good understanding of the whole program in order to make aggressive changes.
4) Gives a good notation for experimenting with  different data structures

Learning Haskell was at times very difficult, though I think with the right
teaching materials it could be made fairly painless. In places you need to
learn the "effective mental attitudes" that allow you to use Haskell well.

I'm still learning, but I have found learning Haskell rewarding and I think
I will be using it more in future, and common lisp less.

Rene.
From: Pascal Bourguignon
Subject: Re: CL failure stories?
Date: 
Message-ID: <874q7d5pxx.fsf@thalassa.informatimago.com>
<··············@hotmail.com> writes:
> I do have a number of problems though when using it.
>
> 1) Keeping track of mutable vs unmutables parts of data structures. Do I
> need to mutate here, or do I need to construct a new changed copy.
> 2) Composing functions works, and composing macros works (i.e. passing a
> function to a function, passing a macro to a macro), but passing a macro to
> a function doesn't work. This means that if I change a function to macro to
> get different semantics, I need to change all associated functions to macros
> as well. Yuck.
> 3) I find I need to have a fairly detailed overview of a lisp program in
> order to reorganize it, change its data structures and algorithms. Keeping
> such a good overview becomes difficult.

1 & 3 are not specific to lisp.  You have the same problem with other
languages.  You can solve them in the same way, using CLOS.

If you get a char* in Objective-C, you don't know if it's mutable or
not.  If you get a NSString* you know it's not mutable, and if you get
a NSMutableString* you know it's mutable.  You can do the same in CLOS.


> Over the last 6 months I have been giving Haskell a go.
>
> As it:
> 1) Keeps track of state for you.
> 2) Lazy evaluations mean that you don't need to use macros to get changes in
> semantics.
> 3) Using the type system you can set up constraints that your program must
> obey, and Haskell programs are automatically very modular. You don't need a
> good understanding of the whole program in order to make aggressive changes.
> 4) Gives a good notation for experimenting with  different data structures

All right you need a straightjacket because you're a disorganized
person.  Haskell will do good for you. ;-)


> Learning Haskell was at times very difficult, though I think with the right
> teaching materials it could be made fairly painless. In places you need to
> learn the "effective mental attitudes" that allow you to use Haskell well.

What I said. :-)

> I'm still learning, but I have found learning Haskell rewarding and I think
> I will be using it more in future, and common lisp less.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The rule for today:
Touch my tail, I shred your hand.
New rule tomorrow.
From: ··············@hotmail.com
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129754667.871697.206340@g43g2000cwa.googlegroups.com>
torve wrote:
> But thats exactly why I posed my original question. I'm a bit dumb and
> make mistakes very easily, especially when a program gets larger -
> there must be something broken with the complexity of large programs...
> So if Haskell would make it possible for me to write non-trivial,
> rather error free programs despite my intellectual shortcoming, and
> without restricting the expressiveness i need, i should invest time
> into learning it.

So you are talking about a one-man project here? Because your original
flame-bait didn't mention that. Most of the success stories you read
are about *teams* of programmers attacking problems that typically
require larger teams in conventional languages. Not single programmers
doing their own thing. Failures only become famous when they involve
enough people, and are usually only loosely related to the
implementation language. Except for trolls on newsgroups who love to
blame their personal failure on a computer language.
From: Cameron MacKinnon
Subject: Re: CL failure stories?
Date: 
Message-ID: <Q8ednfbueOVeLcveRVn-jw@rogers.com>
torve wrote:
> But thats exactly why I posed my original question. I'm a bit dumb and
> make mistakes very easily, especially when a program gets larger -
> there must be something broken with the complexity of large programs...
> So if Haskell would make it possible for me to write non-trivial,
> rather error free programs despite my intellectual shortcoming, and
> without restricting the expressiveness i need, i should invest time
> into learning it.

How many non-trivial programs have you studied? People who want to write 
screenplays watch lots of films and people who want to write books spend 
a lot of time reading them. Why do people who want to write software so 
often spend so little time reading great software, or worse yet, look at 
toy examples in textbooks and believe that large programs are just small 
programs that got big?

Your attitude seems quite strange; rather more like a manager looking 
for a "silver bullet" than a programmer. If I may offer an analogy, 
Henry Ford's adoption of the assembly line meant that instead of a small 
number of craftsmen making cars slowly, a larger number of 
lesser-skilled people were able to make cars quickly. But I don't think 
it was the craftsmen who were enthusiastic about this.

Different languages do create opportunities to make different kinds of 
mistakes, but those types of mistakes (type errors, for example) aren't 
the things that prevent people from building large scale software. Smart 
people have built amazing systems in assembly language and just about 
every language invented since. With the benefit of hindsight, we can 
often see them struggling to invent the constructs which would become 
common in later languages, but they succeeded nonetheless.

Programming in the large requires discipline that you're unlikely to 
find in a magic programming language, but which comes easily to the well 
trained mind with a pencil and paper. "Fifth generation Languages," all 
the rage two decades ago, were going to solve the programmer 
productivity problem, based on pretty much exactly what you've posted: 
If we can find the right language, we can create an assembly line of 
middle-skilled people producing software which formerly required highly 
skilled craftsmen to produce.

There are books on the architecture of large computer programs, and of 
course there are the popular large systems themselves, many of which are 
available in source code form on the 'net. Study them and, if you have 
the innate talent, you'll be able to design large scale systems in any 
language.
From: Cruise Director
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129796186.267707.296350@g44g2000cwa.googlegroups.com>
Cameron MacKinnon wrote:
> torve wrote:
> > But thats exactly why I posed my original question. I'm a bit dumb and
> > make mistakes very easily, especially when a program gets larger -
> > there must be something broken with the complexity of large programs...
> > So if Haskell would make it possible for me to write non-trivial,
> > rather error free programs despite my intellectual shortcoming, and
> > without restricting the expressiveness i need, i should invest time
> > into learning it.
>
> How many non-trivial programs have you studied? People who want to write
> screenplays watch lots of films and people who want to write books spend
> a lot of time reading them. Why do people who want to write software so
> often spend so little time reading great software, or worse yet, look at
> toy examples in textbooks and believe that large programs are just small
> programs that got big?

How many screenplays have you dissected, in a non-trivial way?  Perhaps
you're exaggerating the amount of effort people put into it compared to
software problems.  Most industrial screenplays are in Three Act
Structure and fairly easy to decode.  Programs have arbitrarily complex
structure, they're hard to decode.


Cheers,
Brandon J. Van Every
    (cruise (director (of SeaFunc)
            '(Seattle Functional Programmers)))
http://groups.yahoo.com/group/SeaFunc
From: ·@b.c
Subject: Re: CL failure stories?
Date: 
Message-ID: <8vnel15070g8c23e9vqs68fj7386eensoa@4ax.com>
On Wed, 19 Oct 2005 16:46:11 -0400, Cameron MacKinnon
<··········@clearspot.net> wrote:

>Henry Ford's adoption of the assembly line meant that instead of a small 

The Ford assembly line is not a good analogy for programming.  Cars
get invented, designed, and manufactured.  Manufacture of programs
consists of copying them to CD's to be shipped to customers.
From: Cameron MacKinnon
Subject: Re: CL failure stories?
Date: 
Message-ID: <NoOdnfNnSohGMsreRVn-qg@rogers.com>
·@b.c wrote:
> On Wed, 19 Oct 2005 16:46:11 -0400, Cameron MacKinnon
> <··········@clearspot.net> wrote:
> 
> 
>>Henry Ford's adoption of the assembly line meant that instead of a small 
> 
> 
> The Ford assembly line is not a good analogy for programming.  Cars
> get invented, designed, and manufactured.  Manufacture of programs
> consists of copying them to CD's to be shipped to customers.

It is unfortunate that you received the impression that I thought it was 
a good analogy. I was trying to point out that a lot of companies have 
sold a lot of technology to IT management over the years using basically 
that analogy. There's immense appeal to a classically-trained manager in 
standardizing a process so as to get a known consistent throughput, and 
vendors capitalized on that desire. Rather than admitting the obvious -- 
thet there are vast productivity differences among programmers, and that 
even individuals tend to have daily variance in their productivity -- 
they sold systems that they claimed any fool (more or less) could 
program. When SQL was invented, the target market wasn't programmers; 
end users were expected to formulate their queries in it, freeing them 
from the burden of needing programmers for the simple stuff.
From: Tim X
Subject: Re: CL failure stories?
Date: 
Message-ID: <873bmtop4c.fsf@tiger.rapttech.com.au>
·@b.c writes:

> On Wed, 19 Oct 2005 16:46:11 -0400, Cameron MacKinnon
> <··········@clearspot.net> wrote:
> 
> >Henry Ford's adoption of the assembly line meant that instead of a small 
> 
> The Ford assembly line is not a good analogy for programming.  Cars
> get invented, designed, and manufactured.  Manufacture of programs
> consists of copying them to CD's to be shipped to customers.
> 

I'd go one step further and argue that it is due to management
treating programmers as if they were assembly line workers is why we
often have failures in projects. Its all too common that management
believes success comes from sticking a whole bunch of programmers in a
room, tellinig them to produce x lines of code a day and have the
underlying belief that each programmer is interchangable. 

Tim
-- 
Tim Cross
The e-mail address on this message is FALSE (obviously!). My real e-mail is
to a company in Australia called rapttech and my login is tcross - if you 
really need to send mail, you should be able to work it out!
From: Cameron MacKinnon
Subject: Re: CL failure stories?
Date: 
Message-ID: <_aWdnYrO_qDkcMreRVn-gw@rogers.com>
torve wrote:
> It seems to harder to be subtle in a foreign language than i thought:)

With that email address, I doubt that all your difficulties stem from 
translation.

> I think i do know how large systems work. However, no matter the size
> of a project I would like to minimize the amount of work I have to
> invest while at the same time maximize the complexity of the solutions
> I or a team that I am part of can tackle. I had my share of assembly
> language, but I dont want to look back at my life some day and be proud
> of the insane amounts of hand-crafted (for you germans: mundgeblasen)
> x86 mnemonics I created - look, another raytracer, and it took me just
> 12 years.

I certainly wasn't suggesting that you use assembly language. But I've 
seen assembly programs which (using extensive macros) quickly climb from 
their humble roots and end up with almost as abstract a level of 
expression as programs written in high level languages.

> Of course you are right in your view of 5GLs, but that doesnt mean that
> e.g. language research is dead because "dumb people have no place in
> programming and smart people can write rocket guidance systems using
> only two dipswitches and a pocketknife". I think it is equally clear
> (or should be in this forum) that morons claim the language into which
> they invested a few years of their life would be the end of all future
> development in programming languages. Of course Lisp has deficits -
> where's the built-in pattern matching, the standardized network
> interface, etc etc.

At other times, I've argued in favour of improving some of these 
deficits by mutual agreement among Common Lisp implementors. I've done 
this because I believe that relatively inexperienced programmers are 
helped when they are presented with a complete language with all the 
modern conveniences in place.

My expectations of a programmer (or team) capable of taking on a large 
project are different. If the project is large enough, writing code to 
grace your otherwise insufficient implementation language with pattern 
matching*, sockets etcetera will be a very small part of the job -- in 
fact, you could view it as a fun task to do on a day you don't feel like 
working too hard, and want a bit of a break. Conversely, if adding a 
feature or two to Lisp would be both required for your project and 
nontrivial in relation to it, maybe Lisp doesn't offer enough benefits 
over some other language to be worthwhile.

> To ask where these deficits are, especially the not-so obvious ones,
> strikes me as a better idea than becoming proficient (2+ years) in
> several languages and then deciding for myself.

You asked about projects where the failure could be attributed to the 
language. The response you seemed to like most was from someone who 
finds insurmountable problems with every language he's studied. [N.B. 
Brandon: Congratulations on finally changing that "under construction" 
page at www.indiegamedesign.com]

It isn't that your question wasn't valid, but people have been asking it 
for decades and aren't any closer to the answer. It's just too difficult 
and expensive to create two evenly matched teams of programmers and have 
them create the same application in two different languages to see which 
language is best. And when large projects fail, it's too convenient to 
blame the technologies rather than the people.



* - But see http://www.weitz.de/cl-ppcre/ for regular expressions which 
can be faster than Perl's, if that's what you meant by pattern matching. 
If you meant unification algorithms, they exist in several Lisp textbooks.
From: torve
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129905689.631929.117870@g49g2000cwa.googlegroups.com>
>I certainly wasn't suggesting that you use assembly language. But I've
>seen assembly programs which (using extensive macros) quickly climb from
>their humble roots and end up with almost as abstract a level of
>expression as programs written in high level languages.

Yes, but my question was not about feasibility but maximising
productivity.

And:

>project are different. If the project is large enough, writing code to
>grace your otherwise insufficient implementation language with pattern
>matching*, sockets etcetera will be a very small part of the job -- in

I don't think I can accept this argument - it would justify using a
macroassembler as a good, all-purpose language since it is always
possible to extend it until it is expressive enough, and since one can
implement all the libraries one needs - you just have to to this before
the _real_ start of the project. Also, even the largest project with
the most unusual requirements will profit from an existing
infrastructure of tools and - especially - concepts that were developed
through decade-long research. I do not see any reason why one should
duplicate the creation of such tools except for educational purposes.
(maybe I *really* should try harder to avoid ambiguities, I meant
"pattern matching" as for example the way ocaml expresses case
statements - which in my view enhances expressiveness significantly)

>It isn't that your question wasn't valid, but people have been asking it
>for decades and aren't any closer to the answer. It's just too difficult
>and expensive to create two evenly matched teams of programmers and have
>them create the same application in two different languages to see which
>language is best. And when large projects fail, it's too convenient to
>blame the technologies rather than the people.

Again, thats why I was asking for subjective views of "qualified"
programmers - where this qualification is - again subjectively - judged
by myself, starting with the selection of this forum. I think this is a
valid approach - if there is no agreed upon base for judgement one asks
people who might be competent and weeds out the obviously unqualified
answers.

Another point: I'm not part of a development team at the moment but am
consulting in networking and network-related areas, and my dabbling
with new programming languages, and paradigms, as two goals: first, to
help me construct tools for myself that replace the usual shell/perl
mix everybody else is using, and to see how high-level this can get -
think query languages/expert system shells, interactivly used and
grown, anything to escape the tedium of the IT industry. Doing these
experiments helps that in itself.
Second, I firmly believe that boundaries of our language are the
boundaries of our thinking and our capabilities (don't know whom i'm
paraphrasing at the moment) and that not only this is equally valid for
programming languages but that these concepts intermix - natural
language, prgramming languages, the concepts that we use in our jobs
and ultimately in relationship to the world.

So there.:) Kind of programming language research on a personal level.
From: Cameron MacKinnon
Subject: Re: CL failure stories?
Date: 
Message-ID: <wrqdndvtlccJjsTeRVn-vg@rogers.com>
torve wrote:
> Another point: I'm not part of a development team at the moment but am
> consulting in networking and network-related areas, and my dabbling
> with new programming languages, and paradigms, as two goals: first, to
> help me construct tools for myself that replace the usual shell/perl
> mix everybody else is using, and to see how high-level this can get -
> think query languages/expert system shells, interactivly used and
> grown...

You might be interested in Scsh, a Scheme system designed as a 
replacement for typical Unix shells.


> Second, I firmly believe that boundaries of our language are the
> boundaries of our thinking and our capabilities (don't know whom i'm
> paraphrasing at the moment)

That's the Sapir Whorf hypothesis, and I too believe it is applicable in 
computer science.
From: George Neuner
Subject: Re: CL failure stories?
Date: 
Message-ID: <2hgil1hiub1ra0tv20t312nb3sgeopb0p5@4ax.com>
On Fri, 21 Oct 2005 12:12:15 -0400, Cameron MacKinnon
<··········@clearspot.net> wrote:

>torve wrote:
>
>> Second, I firmly believe that boundaries of our language are the
>> boundaries of our thinking and our capabilities (don't know whom i'm
>> paraphrasing at the moment)
>
>That's the Sapir Whorf hypothesis, and I too believe it is applicable in 
>computer science.

Language affects how we think and understand, but we can imagine
things that no language is adequate to explain, therefore language is
not the boundary of thinking - though it seems to be an impediment to
understanding.

Similarly language can *only* be a boundary on capability when an idea
*must* be explained because it cannot be demonstrated.  Where an idea
can be demonstrated, it might be understood without knowledge of the
demonstrator's language.

George
--
for email reply remove "/" from address
From: Pascal Bourguignon
Subject: Re: CL failure stories?
Date: 
Message-ID: <878xwm4j8l.fsf@thalassa.informatimago.com>
George Neuner <·········@comcast.net> writes:

> On Fri, 21 Oct 2005 12:12:15 -0400, Cameron MacKinnon
> <··········@clearspot.net> wrote:
>
>>torve wrote:
>>
>>> Second, I firmly believe that boundaries of our language are the
>>> boundaries of our thinking and our capabilities (don't know whom i'm
>>> paraphrasing at the moment)
>>
>>That's the Sapir Whorf hypothesis, and I too believe it is applicable in 
>>computer science.

Perhaps literary guys have more difficulty to think out of the
language than math guys.  I've never heard a math guy say that, 
always literary guys.

> Language affects how we think and understand, but we can imagine
> things that no language is adequate to explain, therefore language is
> not the boundary of thinking - though it seems to be an impediment to
> understanding.
>
> Similarly language can *only* be a boundary on capability when an idea
> *must* be explained because it cannot be demonstrated.  Where an idea
> can be demonstrated, it might be understood without knowledge of the
> demonstrator's language.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Nobody can fix the economy.  Nobody can be trusted with their finger
on the button.  Nobody's perfect.  VOTE FOR NOBODY.
From: K. Ari Krupnikov
Subject: Re: CL failure stories?
Date: 
Message-ID: <86wtk6be1z.fsf@deb.lib.aero>
Pascal Bourguignon <····@mouse-potato.com> writes:

> George Neuner <·········@comcast.net> writes:
> 
> > On Fri, 21 Oct 2005 12:12:15 -0400, Cameron MacKinnon
> > <··········@clearspot.net> wrote:
> >
> >>torve wrote:
> >>
> >>> Second, I firmly believe that boundaries of our language are the
> >>> boundaries of our thinking and our capabilities (don't know whom i'm
> >>> paraphrasing at the moment)
> >>
> >>That's the Sapir Whorf hypothesis, and I too believe it is applicable in 
> >>computer science.
> 
> Perhaps literary guys have more difficulty to think out of the
> language than math guys.  I've never heard a math guy say that, 
> always literary guys.

That's because math guys invent their own languages every time they
hit a language boundary.

Ari.

-- 
Elections only count as free and trials as fair if you can lose money
betting on the outcome.
From: Cameron MacKinnon
Subject: Re: CL failure stories?
Date: 
Message-ID: <vJ-dncJ_1MXz6sTeRVn-pA@rogers.com>
Pascal Bourguignon wrote:
> George Neuner <·········@comcast.net> writes:
> 
> 
>>On Fri, 21 Oct 2005 12:12:15 -0400, Cameron MacKinnon
>><··········@clearspot.net> wrote:
>>
>>
>>>torve wrote:
>>>
>>>
>>>>Second, I firmly believe that boundaries of our language are the
>>>>boundaries of our thinking and our capabilities (don't know whom i'm
>>>>paraphrasing at the moment)
>>>
>>>That's the Sapir Whorf hypothesis, and I too believe it is applicable in 
>>>computer science.
> 
> 
> Perhaps literary guys have more difficulty to think out of the
> language than math guys.  I've never heard a math guy say that, 
> always literary guys.

But is it not said that during the early years of the calculus, people 
who used Newton's notation had a more difficult time expressing ideas 
than those who used Leibniz's notation?

>>Language affects how we think and understand, but we can imagine
>>things that no language is adequate to explain, therefore language is
>>not the boundary of thinking - though it seems to be an impediment to
>>understanding.

I'm having trouble imagining imagining something that I can't describe. 
Whether that's because I think I better descriptive powers than I 
actually do, or whether it represents a failure of my imagination I 
cannot say. I can understand the concept, though. Does that count, or do 
I have to ask for an example?  :-)
From: Mark Carter
Subject: Re: CL failure stories?
Date: 
Message-ID: <435a073c$0$41143$14726298@news.sunsite.dk>
Cameron MacKinnon wrote:
> Pascal Bourguignon wrote:
>  I've never heard a math guy say that, always 
> literary guys.

Agreed. The whole idea that you need a word for an idea in order to have 
that idea seems just plain guff. They're ignoring intuition, which is 
nonverbal.

I saw, on the telly, a documentary that said that the brain has some 
kind of precognitive action before the verbal thought. Also, some 
meditators try to remain in this pre-verbalisation stage. So you might 
say that, whilst Sapir Whorf found words very important, the rest of us 
could actually consider it to be a kind of waste product or no 
particular importance.

> I'm having trouble imagining imagining something that I can't describe. 
> Whether that's because I think I better descriptive powers than I 
> actually do, or whether it represents a failure of my imagination I 
> cannot say. 

There's a story involving Guass, arguably the greatest mathematician who 
ever lived. Someone complained that there would never be good 
discoveries in number theory because there were no good notations for 
primes (or something like that). To which Guass replied: "what he needs 
is good notions, not good notations".
From: David Steuber
Subject: Re: CL failure stories?
Date: 
Message-ID: <87u0f9cbei.fsf@david-steuber.com>
Mark Carter <··@privacy.net> writes:

> Agreed. The whole idea that you need a word for an idea in order to
> have that idea seems just plain guff. They're ignoring intuition,
> which is nonverbal.

Never mind the fact that the idea has to come before you can even
think up a word for it.

Cat's do a pretty good job of expressing their desire for you to open
the door so they can go out without having words at all.

-- 
http://www.david-steuber.com/
The UnBlog: An island of conformity in a sea of quirks.
The lowest click through rate in Google's AdSense program.
----------------------------------------------------------------------
From: Edi Weitz
Subject: Re: CL failure stories?
Date: 
Message-ID: <uy84mo6ez.fsf@agharta.de>
On Sat, 22 Oct 2005 00:07:22 +0200, Pascal Bourguignon <····@mouse-potato.com> wrote:

> George Neuner <·········@comcast.net> writes:
>
>>>That's the Sapir Whorf hypothesis, and I too believe it is
>>>applicable in computer science.
>
> Perhaps literary guys have more difficulty to think out of the
> language than math guys.  I've never heard a math guy say that,
> always literary guys.

What kind of guy is Wittgenstein then?

Cheers,
Edi.

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Joe Marshall
Subject: Re: CL failure stories?
Date: 
Message-ID: <vezq49zi.fsf@alum.mit.edu>
Edi Weitz <········@agharta.de> writes:

> On Sat, 22 Oct 2005 00:07:22 +0200, Pascal Bourguignon <····@mouse-potato.com> wrote:
>
>> George Neuner <·········@comcast.net> writes:
>>
>>>>That's the Sapir Whorf hypothesis, and I too believe it is
>>>>applicable in computer science.
>>
>> Perhaps literary guys have more difficulty to think out of the
>> language than math guys.  I've never heard a math guy say that,
>> always literary guys.
>
> What kind of guy is Wittgenstein then?

I've heard Wittgenstein described as an `apophatic mysticist'.
From: Andreas Eder
Subject: Re: CL failure stories?
Date: 
Message-ID: <hbgs23-o09.ln1@eder.homelinux.net>
Edi Weitz <········@agharta.de> writes:

> On Sat, 22 Oct 2005 00:07:22 +0200, Pascal Bourguignon <····@mouse-potato.com> wrote:
>
>> George Neuner <·········@comcast.net> writes:
>>
>>>>That's the Sapir Whorf hypothesis, and I too believe it is
>>>>applicable in computer science.
>>
>> Perhaps literary guys have more difficulty to think out of the
>> language than math guys.  I've never heard a math guy say that,
>> always literary guys.
>
> What kind of guy is Wittgenstein then?

Maybe the problem lies in the different uses of 'language'?
For mathematicians (at least for me) mathematics +is+ a language and
therefor the Sapir Whorf thesis is kind of empty.
There are many more languages than just the natural languages.

And I think Wittgenstein had a similar concept of language.

Andreas
-- 
Wherever I lay my .emacs, there's my $HOME.
From: Ron Garret
Subject: Re: CL failure stories?
Date: 
Message-ID: <rNOSPAMon-E3A012.00140222102005@news.gha.chartermi.net>
In article <··································@4ax.com>,
 George Neuner <·········@comcast.net> wrote:

> we can imagine things that no language is adequate to explain

Like what?

;-)

rg
From: Curt
Subject: Re: CL failure stories?
Date: 
Message-ID: <slrndljvgh.3gm.curty@einstein.electron.net>
On 2005-10-22, Ron Garret <·········@flownet.com> wrote:
> In article <··································@4ax.com>,
>  George Neuner <·········@comcast.net> wrote:
>
>> we can imagine things that no language is adequate to explain
>
> Like what?

Like that.

> ;-)
>
> rg


-- 
"We've heard that a million monkeys at a million keyboards could produce the
complete works of Shakespeare; now, thanks to the Internet, we know that is not
true."  Robert Wilensky
From: Alan Crowe
Subject: Re: CL failure stories?
Date: 
Message-ID: <86mzl1wxee.fsf@cawtech.freeserve.co.uk>
George Neuner <·········@comcast.net> writes:
> Language affects how we think and understand, but we can imagine
> things that no language is adequate to explain, therefore language is
> not the boundary of thinking - though it seems to be an impediment to
> understanding.

This assumes unbounded rationality. In practise rationality
is bounded. For example, you get to think for 70 years and
then you die.

Under bounded rationality, merely making some thoughts easier
to think than others allows language to shape the space of
possible thoughts. One gets to think deeply about topics
that are well supported by one's language, but are confined
to superficiality on topics that can only be expressed in a
convoluted way.

Alan Crowe
The eastern big city in
the northern bit of the island off the coast of europe,
no, not the rain sodden one, the one to the east.
From: Raffael Cavallaro
Subject: Re: CL failure stories?
Date: 
Message-ID: <2005102223462750073%raffaelcavallaro@pasdespamsilvousplaitmaccom>
BTW, just in case there's anyone here who isn't aware of the fact, the 
Sapir-Whorf hypothesis, especially in its strong form, has been widely 
discredited by experimental evidence, including experiments with 
pre-verbal infants.

On 2005-10-22 08:26:17 -0400, Alan Crowe <····@cawtech.freeserve.co.uk> said:

> Under bounded rationality, merely making some thoughts easier
> to think than others allows language to shape the space of
> possible thoughts. One gets to think deeply about topics
> that are well supported by one's language, but are confined
> to superficiality on topics that can only be expressed in a
> convoluted way.

Not quite. It seems that we're all born with an innate ability to make 
a whole range of distinctions about our perceptions, but each natural 
language only supports a proper subset of this total universe of 
distinctions. We therefore learn to ignore those our language doesn't 
talk about. However, we retain the ability to recognize these 
distinctions as adults, we just don't talk about them easily in our 
*unaugmented* native language since it doesn't support them well.
<http://www.news.harvard.edu/gazette/2004/07.22/21-think.html>

The practical effect of this is that sub-cultures that need to 
recognize these ignored distinctions simply develop their own domain 
specific vocabulary for them. A good example is the relatively large 
number of words for different types of frozen water (i.e., snow and 
ice) in the languages of the inuit and aleut on the one hand, and the 
relative poverty of such vocabulary in English on the other. English 
speakers who need to make such distinctions, such as skiers, simply 
develop a vocabulary for it. It's just that this specialized vocabulary 
does not become part of the base language.

So we aren't forever confined to superficiality on topics our language 
doesn't support well - we simply develop new vocabulary, just as lisp 
macros allow the extension of the base language to accomodate new 
domain specific concepts ;^)

regards
From: ···············@yahoo.com
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129818361.585573.243060@g47g2000cwa.googlegroups.com>
You write "...getting the most speed out of
the hardware is possible (I spend time optimising)..."
My Lisp code has a lot of (declare (fixnum i))'s.  I get tired of
writing them,
but it does make a difference in speed.  Do you find that Haskell's
types
help with speed?
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <4357adfa$0$15042$ed2619ec@ptn-nntp-reader02.plus.net>
···············@yahoo.com wrote:
> You write "...getting the most speed out of
> the hardware is possible (I spend time optimising)..."
> My Lisp code has a lot of (declare (fixnum i))'s.  I get tired of
> writing them,
> but it does make a difference in speed.  Do you find that Haskell's
> types
> help with speed?

In SML you have the converse problem - it is fast and fixnum by default but
you can add ": IntInf" to get infinite precision. That is probably better
for most people.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: [Invalid-From-Line]
Subject: Re: CL failure stories?
Date: 
Message-ID: <dj8bml$7cp$1@news.sap-ag.de>
<···············@yahoo.com> wrote in message
·····························@g47g2000cwa.googlegroups.com...
> You write "...getting the most speed out of
> the hardware is possible (I spend time optimising)..."
> My Lisp code has a lot of (declare (fixnum i))'s.  I get tired of
> writing them,
> but it does make a difference in speed.  Do you find that Haskell's
> types
> help with speed?
>
I find this question difficult to understand.

I find that profilers and good algorithms and data structures help with
speed.

I find speed optimisation between common lisp and Haskell quite similar
(though there are natually differences). In Haskell you also need to declare
things of type Int if you want to use machine integers. You also need to add
strictness annotations in Haskell. I think it is probably about equal
between common lisp and Haskell.
( I find I also need to add type annotations in Haskell to prevent the type
inferencer getting confused in the case where a make a thinko in a new
function. I do not regret this as I have found the type annotations to be
usefull documentation when I later look at the code again).

In common lisp I ended up (after a few years of experience) of  encoding
peformance critical entities as fixnums (whether they were numbers or not).
(No consing, no work for the garbage collector, and with full optimisation a
lot of algorithms take place in CPU registers, or with direct indexing in
lookup tables
rather than calculating them at all)
What I found annoying in common lisp is that you cannot say in common lisp
is that this object is a Location (for example), but it is encoded as a
fixnum. And then should be printed as a location.

In Haskell you can say

newtype Location = Location Int

instance Show Location where
  show x = ... (insert you printer code here)

i.e. I have an object that is physically represented by a machine integer,
but it is logically a location, and should be displayed as such.
Dispatching is then also done on the logical type when the variable is of
that type (as in the example above, 'show' is a generic function in lisp
terms).

This doesn't make any difference regarding speed, but does a lot to improve
readability of values (both meanings).

Rene.
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <4356f240$0$6521$ed2619ec@ptn-nntp-reader01.plus.net>
Alan Crowe wrote:
> "Mark Tarver" <··········@ukonline.co.uk> writes:
>> The battle for Lisp is really lost in
>> education.
> 
> See me fighting for Lisp in education on slashdot
> 
> http://slashdot.org/comments.pl?sid=158755&cid=13304138

Here's your Lisp from that page:

(defparameter plus '+)
(defparameter times '*)
(defun differentiate (form variable)
    (typecase form
          (number 0)
          (symbol (if (eql form variable) 1 0))
          (list (case (first form)
              (+ (list plus
                   (differentiate (second form) variable)
                   (differentiate (third form) variable)))
              (* (list plus
                   (list times
                     (differentiate (second form) variable)
                     (third form))
                   (list times
                     (second form)
                     (differentiate (third form) variable))))))))

* (differentiate '(* x (* x x)) 'x)

(+ (* 1 (* X X)) (* X (+ (* 1 X) (* X 1))))

Here are the five equivalent rewrite rules with an extra one to handle
powers, written in Mathematica:

In[1]:= rules = {d[_?NumericQ, _] :> 0,
                 d[x_, x_]        :> 1,
                 d[_Symbol, _]    :> 0, 
                 d[u_ + v_, x_]   :> d[u, x] + d[v, x], 
                 d[u_ v_, x_]     :> u d[v, x] + v d[u, x],
                 d[u_ ^ v_, x_]   :> u^v (v d[u, x]/u + d[v, x] Log[u])};

In[2]:= d[x^3, x] //. rules
           2
Out[2]= 3 x

Here is the equivalent OCaml:

# let rec d : 'expr * string -> 'expr = function
    `Num _, _          -> `Num 0
  | `Var v, x when v=x -> `Num 1
  | `Var _, _          -> `Num 0
  | `Plus(e1, e2), x   -> `Plus(d(e1, x), d(e2, x))
  | `Times(e1, e2), x  -> `Plus(`Times(e1, d(e2, x)),
                                `Times(e2, d(e1, x)));; 
val d :
  ([< `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var of string
    > `Num `Plus `Times ] as 'a) * string -> 'a = <fun>

# d(`Times(`Var "x", `Times(`Var "x", `Var "x")), "x");;
- : [ `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var ofstring ]
    as 'a
=
`Plus
  (`Times
     (`Var "x", `Plus (`Times (`Var "x", `Num 1),
                       `Times (`Var "x", `Num 1))),
   `Times (`Times (`Var "x", `Var "x"), `Num 1))

Here is the equivalent SML:

- datatype expr =
      Num of int
    | Var of string
    | Plus of expr * expr
    | Times of expr * expr;
> New type names: =expr
  datatype expr =
  (expr,
   {con Num : int -> expr,
    con Plus : expr * expr -> expr,
    con Times : expr * expr -> expr,
    con Var : string -> expr})
  con Num = fn : int -> expr
  con Plus = fn : expr * expr -> expr
  con Times = fn : expr * expr -> expr
  con Var = fn : string -> expr

- fun d(Num _, _)         = Num 0
    | d(Var v, x)         = Num (if v=x then 1 else 0)
    | d(Plus(e1, e2), x)  = Plus(d(e1, x), d(e2, x))
    | d(Times(e1, e2), x) = Plus(Times(e1, d(e2, x)), Times(e2, d(e1, x)));
> val d = fn : expr * string -> expr

- d(Times(Var "x", Times(Var "x", Var "x")), "x");;
> val it =
    Plus(Times(Var "x", Plus(Times(Var "x", Num 1), Times(Var "x", Num1))),
         Times(Times(Var "x", Var "x"), Num 1)) : expr

Here is the equivalent Haskell:

data Expr =
      Num Int
    | Var String
    | Add Expr Expr
    | Mul Expr Expr

d (Num n) x = Num 0
d (Var y) x | x==y = Num 1
            | x/=y = Num 0
d (Plus e1 e) x = Plus (d e1 x) (d e2 x)
d (Times e1 e2) x = Plus (Times e1 (d e2 x)) (Times (d e1 x) e2)

d (Times (Var "x") (Times (Var "x") (Var "x"))) "x"

Given that all of these implementations are shorter and clearer than your
Lisp, can you please explain why you are advocating Lisp for this task?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Surendra Singhi
Subject: Re: CL failure stories?
Date: 
Message-ID: <oe5khii7.fsf@netscape.net>
Jon Harrop <······@jdh30.plus.com> writes:

> Alan Crowe wrote:
>> "Mark Tarver" <··········@ukonline.co.uk> writes:
>>> The battle for Lisp is really lost in
>>> education.
>> 
>> See me fighting for Lisp in education on slashdot
>> 
>> http://slashdot.org/comments.pl?sid=158755&cid=13304138

> Given that all of these implementations are shorter and clearer than your
> Lisp, can you please explain why you are advocating Lisp for this task?

My explanation, lisp is like natural language and much easier to read, if you
had written the program in Haskell literate script it would have been more
equal comparison.

:)

-- 
Surendra Singhi
http://www.public.asu.edu/~sksinghi/index.html

,----
| "O thou my friend! The prosperity of Crime is like unto the lightning,
| whose traitorous brilliancies embellish the atmosphere but for an
| instant, in order to hurl into death's very depths the luckless one
| they have dazzled." -- Marquis de Sade
`----
From: Pisin Bootvong
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129795622.242816.208620@f14g2000cwb.googlegroups.com>
Jon Harrop เขียน:
> Alan Crowe wrote:
> > "Mark Tarver" <··········@ukonline.co.uk> writes:
> >> The battle for Lisp is really lost in
> >> education.
> >
> > See me fighting for Lisp in education on slashdot
> >
> > http://slashdot.org/comments.pl?sid=158755&cid=13304138
>
> Here's your Lisp from that page:
>
> (defparameter plus '+)
> (defparameter times '*)
> (defun differentiate (form variable)
>     (typecase form
>           (number 0)
>           (symbol (if (eql form variable) 1 0))
>           (list (case (first form)
>               (+ (list plus
>                    (differentiate (second form) variable)
>                    (differentiate (third form) variable)))
>               (* (list plus
>                    (list times
>                      (differentiate (second form) variable)
>                      (third form))
>                    (list times
>                      (second form)
>                      (differentiate (third form) variable))))))))
>
> * (differentiate '(* x (* x x)) 'x)
>
> (+ (* 1 (* X X)) (* X (+ (* 1 X) (* X 1))))
>
> Here are the five equivalent rewrite rules with an extra one to handle
> powers, written in Mathematica:
>
> In[1]:= rules = {d[_?NumericQ, _] :> 0,
>                  d[x_, x_]        :> 1,
>                  d[_Symbol, _]    :> 0,
>                  d[u_ + v_, x_]   :> d[u, x] + d[v, x],
>                  d[u_ v_, x_]     :> u d[v, x] + v d[u, x],
>                  d[u_ ^ v_, x_]   :> u^v (v d[u, x]/u + d[v, x] Log[u])};
>
> In[2]:= d[x^3, x] //. rules
>            2
> Out[2]= 3 x
>
> Here is the equivalent OCaml:
>
> # let rec d : 'expr * string -> 'expr = function
>     `Num _, _          -> `Num 0
>   | `Var v, x when v=x -> `Num 1
>   | `Var _, _          -> `Num 0
>   | `Plus(e1, e2), x   -> `Plus(d(e1, x), d(e2, x))
>   | `Times(e1, e2), x  -> `Plus(`Times(e1, d(e2, x)),
>                                 `Times(e2, d(e1, x)));;
> val d :
>   ([< `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var of string
>     > `Num `Plus `Times ] as 'a) * string -> 'a = <fun>
>
> # d(`Times(`Var "x", `Times(`Var "x", `Var "x")), "x");;
> - : [ `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var ofstring ]
>     as 'a
> =
> `Plus
>   (`Times
>      (`Var "x", `Plus (`Times (`Var "x", `Num 1),
>                        `Times (`Var "x", `Num 1))),
>    `Times (`Times (`Var "x", `Var "x"), `Num 1))
>
> Here is the equivalent SML:
>
> - datatype expr =
>       Num of int
>     | Var of string
>     | Plus of expr * expr
>     | Times of expr * expr;
> > New type names: =expr
>   datatype expr =
>   (expr,
>    {con Num : int -> expr,
>     con Plus : expr * expr -> expr,
>     con Times : expr * expr -> expr,
>     con Var : string -> expr})
>   con Num = fn : int -> expr
>   con Plus = fn : expr * expr -> expr
>   con Times = fn : expr * expr -> expr
>   con Var = fn : string -> expr
>
> - fun d(Num _, _)         = Num 0
>     | d(Var v, x)         = Num (if v=x then 1 else 0)
>     | d(Plus(e1, e2), x)  = Plus(d(e1, x), d(e2, x))
>     | d(Times(e1, e2), x) = Plus(Times(e1, d(e2, x)), Times(e2, d(e1, x)));
> > val d = fn : expr * string -> expr
>
> - d(Times(Var "x", Times(Var "x", Var "x")), "x");;
> > val it =
>     Plus(Times(Var "x", Plus(Times(Var "x", Num 1), Times(Var "x", Num1))),
>          Times(Times(Var "x", Var "x"), Num 1)) : expr
>
> Here is the equivalent Haskell:
>
> data Expr =
>       Num Int
>     | Var String
>     | Add Expr Expr
>     | Mul Expr Expr
>
> d (Num n) x = Num 0
> d (Var y) x | x==y = Num 1
>             | x/=y = Num 0
> d (Plus e1 e) x = Plus (d e1 x) (d e2 x)
> d (Times e1 e2) x = Plus (Times e1 (d e2 x)) (Times (d e1 x) e2)
>
> d (Times (Var "x") (Times (Var "x") (Var "x"))) "x"
>
> Given that all of these implementations are shorter and clearer than your
> Lisp, can you please explain why you are advocating Lisp for this task?
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> http://www.ffconsultancy.com

By:
 1.) Reduce meaningful variable names ('variable', 'form') to ('v',
'expr'), which could be either good or bad depending on the context.
 2.) Use generic function to make pattern matching clearer.
 3.) Make method returning trivial expression a one-liner. It doesn't
break the style too much and still is readable.
 3.) Use Backquote for list construction, so that the result return
form looks clearer.
 4.) Optionally, Use '+ and '* as symbol directly without defparameter.
It's possible to put quoted '+' in returned expression and IMO it is
clearer that way. Mathematica version also does this.

(defmethod d ((n number) var) 0)
(defmethod d ((x symbol) var) (if (eql x var) 1 0))
(defmethod d ((expr list) var)
  (destructuring-bind (op e1 e2) expr
    (case op
      (+ `(+ (d e1 var) (d e2 var)))
      (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))

That's 7 line of code.

Out of these languages, only Lisp seems to allow you to directly
evaluate the derivative's result. I don't know about Mathematica, but
for other solution it doens't seem so.

You can do this in Lisp:

CL-USER> (d '(* x x) 'x)
 ==> (+ (* 1 X) (* X 1))
CL-USER> (eval `(let ((x 2)) ,*))
 ==> 4

I think there is also eval in Haskell and others, but I would like to
see how the code will change regarding the added requirement.
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <43576305$0$49780$ed2e19e4@ptn-nntp-reader04.plus.net>
Pisin Bootvong wrote:
> By:
>  1.) Reduce meaningful variable names ('variable', 'form') to ('v',
> 'expr'), which could be either good or bad depending on the context.
>  2.) Use generic function to make pattern matching clearer.
>  3.) Make method returning trivial expression a one-liner. It doesn't
> break the style too much and still is readable.
>  3.) Use Backquote for list construction, so that the result return
> form looks clearer.
>  4.) Optionally, Use '+ and '* as symbol directly without defparameter.
> It's possible to put quoted '+' in returned expression and IMO it is
> clearer that way. Mathematica version also does this.
> 
> (defmethod d ((n number) var) 0)
> (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> (defmethod d ((expr list) var)
>   (destructuring-bind (op e1 e2) expr
>     (case op
>       (+ `(+ (d e1 var) (d e2 var)))
>       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
> 
> That's 7 line of code.

That's vastly better than the original Lisp code!

> Out of these languages, only Lisp seems to allow you to directly
> evaluate the derivative's result. I don't know about Mathematica, but
> for other solution it doens't seem so.

Mathematica already did. :-)

> You can do this in Lisp:
> 
> CL-USER> (d '(* x x) 'x)
>  ==> (+ (* 1 X) (* X 1))
> CL-USER> (eval `(let ((x 2)) ,*))
>  ==> 4
> 
> I think there is also eval in Haskell and others, but I would like to
> see how the code will change regarding the added requirement.

In SML and OCaml you would write eval yourself. For example, in OCaml:

# let rec eval state = function
    | `Num n -> n
    | `Var v -> List.assoc v state
    | `Plus(e1, e2) -> eval state e1 + eval state e2
    | `Times(e1, e2) -> eval state e1 * eval state e2;;
val eval :
  ('a * int) list ->
  ([< `Num of int | `Plus of 'b * 'b | `Times of 'b * 'b
    | `Var of 'a ] as 'b) -> int = <fun>

# eval ["x", 2] expr;;
- : int = 12

Alternatively, you could implement the derivative function as a camlp4 macro
and use OCaml to evaluate the result.

In SML:

- fun assoc x [] = raise Empty
    | assoc x ((k, v)::t) = if x=k then v else assoc x t;
val assoc = fn : ''a -> (''a * 'b) list -> 'b

- fun eval state = fn
      Num n => n
    | Var v => assoc v state
    | Plus(e1, e2) => eval state e1 + eval state e2
    | Times(e1, e2) => eval state e1 * eval state e2;
val eval = fn : (string * int) list -> expr -> int

- eval [("x", 2)] expr;;
val it = 12 : int

Similarly: Out of these languages, only Mathematica seems to allow you to
directly simplify the derivative's result. I don't know about Lisp, but for
other solution it doesn't seem so.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Alan Crowe
Subject: Re: CL failure stories?
Date: 
Message-ID: <86r7agf7wu.fsf@cawtech.freeserve.co.uk>
Jon Harrop <······@jdh30.plus.com> writes:
> Pisin Bootvong wrote:
> > (defmethod d ((n number) var) 0)
> > (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> > (defmethod d ((expr list) var)
> >   (destructuring-bind (op e1 e2) expr
> >     (case op
> >       (+ `(+ (d e1 var) (d e2 var)))
> >       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
> > 
> > That's 7 line of code.
> 
> That's vastly better than the original Lisp code!

I prefer a more nuanced appreciation of the merits and
de-merits of the various pieces of code. My differentiate
function was written specifically for Slashdot, in the
knowledge that many of my readers would be unfamiliar with
the language. I attempted to write my code so that you could
follow it even if your knowledge of CL was confined to
knowing that it used a fully parenthesised prefix
notation. Accordlying I tried to confine myself to operators
with self-explanatory names. It seemed worth the risk of
using typecase because the type names, number, symbol, list,
would indicate what was going on.

Some educators like to hold back on teaching QUOTE. Abelson
and Sussman delay until page 142. This suggested to me that
I should avoid quote. That was not possible, but 

(defparameter plus '+)

(list plus ...

kept QUOTE out of the body of the main function.

The post was to a thread on the best language for beginner
programmers. I proposed that the programming language course
could profitably be integrated with other parts of the
curriculum.  For example you might want the programming
course and the calculus course to be mutually
supporting. This would influence your choice of programming
language.

Mutual support means that the calculus course is a source of
algorithms for the beginning programmer to practise on,
while coding them up offers the beginner at calculus an
alternative perspective on what he is doing when he
differentiates.

If you have already learnt CL, you can code differentiation very
slickly, but that would not be an example of the mutual
support that I was proposing, and thus unsuitable for my
post on slashdot.

Leaving the issue of particular pieces of code, why might CL
be suitable for an integrated curriculum?

Writing your own code is a very solid way of studying. So
long as you harbour misconceptions about an algorithm, your
code will stubbornly refuse to work.

Writing your own code is also a very slow way of
studying. The designer of an integrated curriculum must
steer a tricky course between superficallity and excessive
course work. CL offers the designer the possibility of
writing macros to provide building blocks that student can
use in his code. The designer of the curriculum can tune the
students involvement with the material and the time devoted
to programming by adjusting the macropackage that is part of
the course.

Perhaps a pattern matching capability might help the students
code up differentiation more quickly. Put it in. Perhaps
they end up taking a long time doing infix + and * and are
slow to realise that + and * are just functions. Take it out
again.

Perhaps students are tripping over the brackets in 

   (let ((y (f x))) ....)

Provide 

   (bind y (f x) ...)

The choice of a computer programming language for use in
education is fraught. Things will surely go wrong in the
class room. CL has the selling point that you can adjust the
language with macrology in time for next term.

Alan Crowe
Edinburgh
Scotland
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <4357e518$0$49776$ed2e19e4@ptn-nntp-reader04.plus.net>
Alan Crowe wrote:
>> That's vastly better than the original Lisp code!
> 
> I prefer a more nuanced appreciation of the merits and
> de-merits of the various pieces of code. My differentiate
> function was written specifically for Slashdot, in the
> knowledge that many of my readers would be unfamiliar with
> the language.
> ...

I am unfamiliar with Lisp but I find the other Lisp implementations that
have been posted easier to understand than yours.

Your example code led me to believe that Lisp was a poor choice for this
task. Other people have shown that my conclusion was wrong.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Alan Crowe
Subject: Re: CL failure stories?
Date: 
Message-ID: <861x2fp4e3.fsf@cawtech.freeserve.co.uk>
Jon Harrop <······@jdh30.plus.com> writes:
> I am unfamiliar with Lisp but I find the other Lisp implementations that
> have been posted easier to understand than yours.

Can I persuade you to post your criticisms on Slashdot?

I could follow up my own post, saying that other people
found that my code lost more in verbosity than it gained by
simplicity, and posting the code that other people said was
best.

However, slashdot is a public forum. Post on behalf of
others would seem rather odd.

Alan Crowe
Edinburgh
Scotland
From: Kenny Tilton
Subject: Re: CL failure stories?
Date: 
Message-ID: <%oY5f.7658$h25.2343@news-wrt-01.rdc-nyc.rr.com>
Alan Crowe wrote:
> Jon Harrop <······@jdh30.plus.com> writes:
> 
>>Pisin Bootvong wrote:
>>
>>>(defmethod d ((n number) var) 0)
>>>(defmethod d ((x symbol) var) (if (eql x var) 1 0))
>>>(defmethod d ((expr list) var)
>>>  (destructuring-bind (op e1 e2) expr
>>>    (case op
>>>      (+ `(+ (d e1 var) (d e2 var)))
>>>      (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
>>>
>>>That's 7 line of code.
>>
>>That's vastly better than the original Lisp code!
> 
> 
> I prefer a more nuanced appreciation of the merits and
> de-merits of the various pieces of code. My differentiate
> function was written specifically for Slashdot, in the
> knowledge that many of my readers would be unfamiliar with
> the language.

Nonsense. How those three methods of the same name get differentiated is 
obvious whether one knows about multi-methods or not. In fact, if not, 
one immediately groks them. Why sell students short?

-- 
Kenny

Why Lisp? http://wiki.alu.org/RtL_Highlight_Film

"I've wrestled with reality for 35 years, Doctor, and I'm happy to state 
I finally won out over it."
     Elwood P. Dowd, "Harvey", 1950
From: Alan Crowe
Subject: Re: CL failure stories?
Date: 
Message-ID: <86y84nnotl.fsf@cawtech.freeserve.co.uk>
Kenny Tilton <·······@nyc.rr.com> writes:
> Nonsense. How those three methods of the same name get differentiated is 
> obvious whether one knows about multi-methods or not. In fact, if not, 
> one immediately groks them. Why sell students short?

I was going to suggest that you post your criticisms on
slashdot, but when I used by browser to check that I was
giving you the correct URL I noticed that the reply button
had gone. "This discussion has been archived. No new
comments can be posted."

That is a pity. If I want to make a similar point in future
I will use one of the slick versions that I have seen on
c.l.l.

I don't think that any of us /really/ know which version works
best for a wide audience, but I claim no special insight and
am happy to go with the average guess. (or to defer to those
with relevant experience)

(defmethod d ((n number) var) 0)
(defmethod d ((x symbol) var) (if (eql x var) 1 0))
(defmethod d ((expr list) var)
  (destructuring-bind (op e1 e2) expr
    (case op
      (+ `(+ (d e1 var) (d e2 var)))
      (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
;;; posted on c.l.l. by Pisin Bootvong

looks best to me at the moment.

Alan Crowe
Edinburgh
Scotland
From: Florian Weimer
Subject: Re: CL failure stories?
Date: 
Message-ID: <87k6g6ovxn.fsf@mid.deneb.enyo.de>
* Alan Crowe:

> (defmethod d ((n number) var) 0)
> (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> (defmethod d ((expr list) var)
>   (destructuring-bind (op e1 e2) expr
>     (case op
>       (+ `(+ (d e1 var) (d e2 var)))
>       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
> ;;; posted on c.l.l. by Pisin Bootvong
>
> looks best to me at the moment.

Except that it's buggy.  Maybe it's not the best way to write such
things in the end. 8-)
From: David Steuber
Subject: Re: CL failure stories?
Date: 
Message-ID: <87y84m6cbg.fsf@david-steuber.com>
Kenny Tilton <·······@nyc.rr.com> writes:

> Why sell students short?

Perhaps you think their stock will fall in the near term.

-- 
http://www.david-steuber.com/
The UnBlog: An island of conformity in a sea of quirks.
The lowest click through rate in Google's AdSense program.
----------------------------------------------------------------------
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <2C66f.36$pa3.18098@typhoon.nyu.edu>
Jon Harrop wrote:
> Pisin Bootvong wrote:
> 
>>By:
>> 1.) Reduce meaningful variable names ('variable', 'form') to ('v',
>>'expr'), which could be either good or bad depending on the context.
>> 2.) Use generic function to make pattern matching clearer.
>> 3.) Make method returning trivial expression a one-liner. It doesn't
>>break the style too much and still is readable.
>> 3.) Use Backquote for list construction, so that the result return
>>form looks clearer.
>> 4.) Optionally, Use '+ and '* as symbol directly without defparameter.
>>It's possible to put quoted '+' in returned expression and IMO it is
>>clearer that way. Mathematica version also does this.
>>
>>(defmethod d ((n number) var) 0)
>>(defmethod d ((x symbol) var) (if (eql x var) 1 0))
>>(defmethod d ((expr list) var)
>>  (destructuring-bind (op e1 e2) expr
>>    (case op
>>      (+ `(+ (d e1 var) (d e2 var)))
>>      (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
>>
>>That's 7 line of code.
> 
> 
> That's vastly better than the original Lisp code!

... and you didn't even consider the possibility :}

>>Out of these languages, only Lisp seems to allow you to directly
>>evaluate the derivative's result. I don't know about Mathematica, but
>>for other solution it doens't seem so.
> 
> 
> Mathematica already did. :-)
> 
> 
>>You can do this in Lisp:
>>
>>CL-USER> (d '(* x x) 'x)
>> ==> (+ (* 1 X) (* X 1))
>>CL-USER> (eval `(let ((x 2)) ,*))
>> ==> 4
>>
>>I think there is also eval in Haskell and others, but I would like to
>>see how the code will change regarding the added requirement.
> 
> 
> In SML and OCaml you would write eval yourself. For example, in OCaml:
> 
> # let rec eval state = function
>     | `Num n -> n
>     | `Var v -> List.assoc v state
>     | `Plus(e1, e2) -> eval state e1 + eval state e2
>     | `Times(e1, e2) -> eval state e1 * eval state e2;;
> val eval :
>   ('a * int) list ->
>   ([< `Num of int | `Plus of 'b * 'b | `Times of 'b * 'b
>     | `Var of 'a ] as 'b) -> int = <fun>
> 
> # eval ["x", 2] expr;;
> - : int = 12

Thanks, but no cigar.  This is a poor match to the (limited) CL EVAL 
function, plus.....  it makes your program longer.  Are you gettin the 
hint? :)

> Alternatively, you could implement the derivative function as a camlp4 macro
> and use OCaml to evaluate the result.
> 
> In SML:
> 
> - fun assoc x [] = raise Empty
>     | assoc x ((k, v)::t) = if x=k then v else assoc x t;
> val assoc = fn : ''a -> (''a * 'b) list -> 'b
> 
> - fun eval state = fn
>       Num n => n
>     | Var v => assoc v state
>     | Plus(e1, e2) => eval state e1 + eval state e2
>     | Times(e1, e2) => eval state e1 * eval state e2;
> val eval = fn : (string * int) list -> expr -> int
> 
> - eval [("x", 2)] expr;;
> val it = 12 : int

Still getting longer with still one fraction of the functionality of CL. :)

> 
> Similarly: Out of these languages, only Mathematica seems to allow you to
> directly simplify the derivative's result. I don't know about Lisp, but for
> other solution it doesn't seem so.

Comparing Mathematica to CL (or to any other straight PL) is the 
proverbial apples and oranges task.  If you insist on using Mathematica 
as a yardstick, then you should measure it up to Axiom (guess what it is 
written in?) or Maxima (guess again), just to name two.

Cheers
--
Marco
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: CL failure stories?
Date: 
Message-ID: <873bmwo3p9.fsf@qrnik.zagroda>
"Pisin Bootvong" <··········@gmail.com> writes:

>> Given that all of these implementations are shorter and clearer than your
>> Lisp, can you please explain why you are advocating Lisp for this task?

> By:
>  1.) Reduce meaningful variable names ('variable', 'form') to ('v',
> 'expr'), which could be either good or bad depending on the context.
>  2.) Use generic function to make pattern matching clearer.
>  3.) Make method returning trivial expression a one-liner. It doesn't
> break the style too much and still is readable.
>  3.) Use Backquote for list construction, so that the result return
> form looks clearer.
>  4.) Optionally, Use '+ and '* as symbol directly without defparameter.
> It's possible to put quoted '+' in returned expression and IMO it is
> clearer that way. Mathematica version also does this.
>
> (defmethod d ((n number) var) 0)
> (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> (defmethod d ((expr list) var)
>   (destructuring-bind (op e1 e2) expr
>     (case op
>       (+ `(+ (d e1 var) (d e2 var)))
>       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
>
> That's 7 line of code.

Points 1 and the first 3 aren't specific to Lisp. Points 2, the second 3,
and 4 say how to make one Lisp solution neater than another Lisp solution,
they don't show Lisp advantages over other languages either.

The case of + has two commas missing. When the syntax and types of the
language and the metalanguage are the same, confusing them by mistake
might be left undetected even at runtime.

Here is the same in my Kogut (5 lines):

let D
   (is NUMBER) v {0}
   (x & is SYMBOL) v {if (x == v) {1} else {0}}
   '+':[e1 e2] v {'+':[(D e1 v) (D e2 v)]}
   '*':[e1 e2] v {'+':['*':[(D e1 v) e2] '*':[e1 (D e2 v)]]};

I don't claim that it's "better" than in Lisp, just that Lisp is not
unique.

> Out of these languages, only Lisp seems to allow you to directly
> evaluate the derivative's result.

Indeed. It's easier than differentiation though:

let E
   (n & is NUMBER) env {n}
   (x & is SYMBOL) env {env x}
   '+':[e1 e2] env {E e1 env + E e2 env}
   '*':[e1 e2] env {E e1 env * E e2 env};

D '*':[#x #x] #x->E (function [#x {7}])->WriteLine;
// Result: 14

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Pisin Bootvong
Subject: Re: CL failure stories?
Date: 
Message-ID: <1129912997.121628.33530@g44g2000cwa.googlegroups.com>
Marcin 'Qrczak' Kowalczyk wrote:
> "Pisin Bootvong" <··········@gmail.com> writes:
>
> >> Given that all of these implementations are shorter and clearer than your
> >> Lisp, can you please explain why you are advocating Lisp for this task?
>
> > By:
> >  1.) Reduce meaningful variable names ('variable', 'form') to ('v',
> > 'expr'), which could be either good or bad depending on the context.
> >  2.) Use generic function to make pattern matching clearer.
> >  3.) Make method returning trivial expression a one-liner. It doesn't
> > break the style too much and still is readable.
> >  3.) Use Backquote for list construction, so that the result return
> > form looks clearer.
> >  4.) Optionally, Use '+ and '* as symbol directly without defparameter.
> > It's possible to put quoted '+' in returned expression and IMO it is
> > clearer that way. Mathematica version also does this.
> >
> > (defmethod d ((n number) var) 0)
> > (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> > (defmethod d ((expr list) var)
> >   (destructuring-bind (op e1 e2) expr
> >     (case op
> >       (+ `(+ (d e1 var) (d e2 var)))
> >       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
> >
> > That's 7 line of code.
>
> Points 1 and the first 3 aren't specific to Lisp. Points 2, the second 3,
> and 4 say how to make one Lisp solution neater than another Lisp solution,
> they don't show Lisp advantages over other languages either.
>

I am not saying that any of that point is specific to Lisp. The point
I'm trying to make is that the reason codes in other languages are
shorter has nothing to do with the language it is written in. The code
is only shorter because the author of the original Lisp code doesn't
focus his code on shortness; he just want to show some very simple
example lisp code. He chose long variable names and to use simple
construct so that non-lisper might understand better. How could
Haskell's version be that short if the author decide not to use Pattern
matching?

It will be equally nonsense if I took some SML, OCaml or Haskell code
that haven't yet used all its appropriate features and claim that such
language lacks a capability of doing so.

> The case of + has two commas missing. When the syntax and types of the
> language and the metalanguage are the same, confusing them by mistake
> might be left undetected even at runtime.
>

This one can be found the first time you test the program. In fact, I
have already found the error. But I posted the wrong version to the
group. Not any type of language can prevent you from submiting the
wrong version of code to newsgroup :-)

> Here is the same in my Kogut (5 lines):
>
> let D
>    (is NUMBER) v {0}
>    (x & is SYMBOL) v {if (x == v) {1} else {0}}
>    '+':[e1 e2] v {'+':[(D e1 v) (D e2 v)]}
>    '*':[e1 e2] v {'+':['*':[(D e1 v) e2] '*':[e1 (D e2 v)]]};
>
> I don't claim that it's "better" than in Lisp, just that Lisp is not
> unique.
>

To be honest though, Lisp really loses in pattern matching.
If the expression can not be easily distinguish by type (symbol, number
or list) 'defmethod' would not have work and Lisp version will be
longer.
I really like to have standard pattern matching construct in Lisp.
It's possible to build my own, but that will add more line of code than
language with such standard feature. Pattern matching is very common
and convenient feature to have as a statndard anyway.

> > Out of these languages, only Lisp seems to allow you to directly
> > evaluate the derivative's result.
>
> Indeed. It's easier than differentiation though:
>
> let E
>    (n & is NUMBER) env {n}
>    (x & is SYMBOL) env {env x}
>    '+':[e1 e2] env {E e1 env + E e2 env}
>    '*':[e1 e2] env {E e1 env * E e2 env};
>
> D '*':[#x #x] #x->E (function [#x {7}])->WriteLine;
> // Result: 14
>

Yes, but the point is why do you have to write this function? If you
have to write it, then should not this code be counted as additional
line of code? Regarding the language that already have eval function,
it should be possible to pass the result of 'diff' to 'eval' directly.

> --
>    __("<         Marcin Kowalczyk
>    \__/       ······@knm.org.pl
>     ^^     http://qrnik.knm.org.pl/~qrczak/
From: Thomas F. Burdick
Subject: Re: CL failure stories?
Date: 
Message-ID: <xcvll0md6vm.fsf@conquest.OCF.Berkeley.EDU>
"Pisin Bootvong" <··········@gmail.com> writes:

> To be honest though, Lisp really loses in pattern matching.
> If the expression can not be easily distinguish by type (symbol, number
> or list) 'defmethod' would not have work and Lisp version will be
> longer.
> I really like to have standard pattern matching construct in Lisp.
> It's possible to build my own, but that will add more line of code than
> language with such standard feature. Pattern matching is very common
> and convenient feature to have as a statndard anyway.

Nonsense.  DEFTYPE, the compound form of the CONS typespec, and
TYPECASE give you all you need for simple pattern matching.

  (deftype list-of (&rest types)
    (labels ((build (types)
               (cond
                 ((null types) 'null)
                 ((atom types) types)
                 (t
                  (list 'cons
                        (if (and (listp (car types))
                                 (eql (caar types) 'quote)
                                 (null (cddar types)))
                            `(eql ,(second (car types)))
                            (car types))
                        (build (cdr types)))))))
      (build types)))

  (deftype defun-form () '(list-of 'defun symbol list . t))

  (typecase '(defun foo () 'hi)
    (number 'nope)
    (symbol 'nope)
    (defun-form 'yup))
  => YUP

And if you want something more advanced, you can always use something
like cl-unification.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | Free Mumia Abu-Jamal! |
     ,--'    _,'   | Abolish the racist    |
    /       /      | death penalty!        |
   (   -.  |       `-----------------------'
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: CL failure stories?
Date: 
Message-ID: <87d5ly4nmk.fsf@qrnik.zagroda>
···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Nonsense.  DEFTYPE, the compound form of the CONS typespec,
> and TYPECASE give you all you need for simple pattern matching.

It helps only with checking whether the pattern matches. It doesn't
help with extracting parts which correspond to variables in the
pattern.

If pattern matching didn't include that, it would be called predicate
checking.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <Z196f.37$pa3.17713@typhoon.nyu.edu>
Pisin Bootvong wrote:
> 
> To be honest though, Lisp really loses in pattern matching.
> If the expression can not be easily distinguish by type (symbol, number
> or list) 'defmethod' would not have work and Lisp version will be
> longer.
> I really like to have standard pattern matching construct in Lisp.
> It's possible to build my own, but that will add more line of code than
> language with such standard feature. Pattern matching is very common
> and convenient feature to have as a statndard anyway.

Shameless plug :)

http://common-lisp.net/project/cl-unification

Why have pattern matching when you can have full blown unification? :)

Cheers
--
Marco
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <43598308$0$15037$ed2619ec@ptn-nntp-reader02.plus.net>
Pisin Bootvong wrote:
> Yes, but the point is why do you have to write this function? If you 
> have to write it, then should not this code be counted as additional
> line of code? Regarding the language that already have eval function,
> it should be possible to pass the result of 'diff' to 'eval' directly.

I believe eval conflicts with static typing. There are languages like
MetaOCaml that allow you to construct s-exprs in OCaml but you cannot
manipulate them as you can in Lisp.

The tradeoff seems to be that either eval is much faster (as in Lisp) or
everything else is either much shorter (and slow) or much faster (and
verbose) or much quicker to compile.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <1O57f.38$pa3.19149@typhoon.nyu.edu>
Jon Harrop wrote:
> Pisin Bootvong wrote:
> 
>>Yes, but the point is why do you have to write this function? If you 
>>have to write it, then should not this code be counted as additional
>>line of code? Regarding the language that already have eval function,
>>it should be possible to pass the result of 'diff' to 'eval' directly.
> 
> 
> I believe eval conflicts with static typing. There are languages like
> MetaOCaml that allow you to construct s-exprs in OCaml but you cannot
> manipulate them as you can in Lisp.

Din't know that, but I am not surpised.
> 
> The tradeoff seems to be that either eval is much faster (as in Lisp) or
> everything else is either much shorter (and slow) or much faster (and
> verbose) or much quicker to compile.
> 

Yes.  And CL achieves the "optimum" w.r.t. these conflicting goals.

Cheers
--
Marco
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <435cf0ad$0$6498$ed2619ec@ptn-nntp-reader01.plus.net>
Marco Antoniotti wrote:
>> The tradeoff seems to be that either eval is much faster (as in Lisp) or
>> everything else is either much shorter (and slow) or much faster (and
>> verbose) or much quicker to compile.
> 
> Yes.  And CL achieves the "optimum" w.r.t. these conflicting goals.

I am surprised anyone would think that. Objectively, you would need to be
calling "eval" about 5 times as much as any other function for it to pay
off...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <rv67f.40$pa3.19153@typhoon.nyu.edu>
Jon Harrop wrote:
> Marco Antoniotti wrote:
> 
>>>The tradeoff seems to be that either eval is much faster (as in Lisp) or
>>>everything else is either much shorter (and slow) or much faster (and
>>>verbose) or much quicker to compile.
>>
>>Yes.  And CL achieves the "optimum" w.r.t. these conflicting goals.
> 
> 
> I am surprised anyone would think that. Objectively, you would need to be
> calling "eval" about 5 times as much as any other function for it to pay
> off...
> 

As usual you are missing the bigger picture.  The number of calls to 
EVAL is irrelevant.  As a matter of fact calling EVAL is refrained in 
regular code, but it is nice to have it available.

The point is that the optimum w.r.t. conflicting goals is achieved by CL 
and not, sorry to point this out, by SML, OCaml et similia, despite the 
very useful static type checking bit.

Cheers
--
Marco
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <435cf806$0$6498$ed2619ec@ptn-nntp-reader01.plus.net>
Marco Antoniotti wrote:
> The point is that the optimum w.r.t. conflicting goals is achieved by CL
> and not, sorry to point this out, by SML, OCaml et similia, despite the
> very useful static type checking bit.

The available quantitative evidence clearly contradicts this:

  http://www.ffconsultancy.com/free/ray_tracer/languages.html

If you believe otherwise then write a Lisp implementation of that program
that is not either much longer, much slower to run or much slower to
compile than all of the other languages.

In point of fact, Lisp in the least optimal in this sense. Even worse than
Java...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <yma7f.41$pa3.19060@typhoon.nyu.edu>
Jon Harrop wrote:
> Marco Antoniotti wrote:
> 
>>The point is that the optimum w.r.t. conflicting goals is achieved by CL
>>and not, sorry to point this out, by SML, OCaml et similia, despite the
>>very useful static type checking bit.
> 
> 
> The available quantitative evidence clearly contradicts this:
> 
>   http://www.ffconsultancy.com/free/ray_tracer/languages.html
> 
> If you believe otherwise then write a Lisp implementation of that program
> that is not either much longer, much slower to run or much slower to
> compile than all of the other languages.
> 
> In point of fact, Lisp in the least optimal in this sense. Even worse than
> Java...

I have been through *your* quantitations and you have failed to convince 
me, or for what matters, all the other lispers around.

We know that C, C++, Fortran, OCaml etc etc can produce code that is 
faster than the best current CL implementation.  That was never disputed.

What you refuse to consider is that CL is optimal when you take into 
account flexibility, extensibility, elegance, orthogonality and ease of 
use, combined with reasonable (read: nearly as good as C) efficiency. 
In that sense CL is optimal.  You have failed to get this point for a 
long time now.  I am just doing the "repetita juvant" routine :)

Cheers
--
Marco
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <435d3964$0$6532$ed2619ec@ptn-nntp-reader01.plus.net>
Marco Antoniotti wrote:
> We know that C, C++, Fortran, OCaml etc etc can produce code that is
> faster than the best current CL implementation.

and shorter in the case of OCaml and SML.

> that was never disputed.

Actually it was. Thomas Fischbacher tried to dispute, for example.

> What you refuse to consider is that CL is optimal when you take into
> account flexibility, extensibility, elegance, orthogonality and ease of
> use, combined with reasonable (read: nearly as good as C) efficiency.
> In that sense CL is optimal.

Sure. I am not going to contest your faith-based views here.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <TDb7f.43$pa3.17933@typhoon.nyu.edu>
Jon Harrop wrote:
> Marco Antoniotti wrote:
> 
>>We know that C, C++, Fortran, OCaml etc etc can produce code that is
>>faster than the best current CL implementation.
> 
> 
> and shorter in the case of OCaml and SML.

Only if you do not take into consideration what you consider "faith 
based" issue, which are instead "quality of life issues" to many 
programmers :)  Besides, the little derivative examples quickly showed 
that your OCaml and SML were getting longer and uglier as soon as you 
added the required flexibility, extensibility and expressiveness.  :)

>>that was never disputed.
> 
> 
> Actually it was. Thomas Fischbacher tried to dispute, for example.
> 

Let's have Thomas Fishbacher dispute them.  I will not, until I have a 
CL compiler that actually beats the crap out of the OCaml one.

>>What you refuse to consider is that CL is optimal when you take into
>>account flexibility, extensibility, elegance, orthogonality and ease of
>>use, combined with reasonable (read: nearly as good as C) efficiency.
>>In that sense CL is optimal.
> 
> 
> Sure. I am not going to contest your faith-based views here.
> 

Just because you don't get them, it does not mean they are less real and 
grounded than your blind faith in static typing (which, again, is a good 
thing), pattern matching, and... brevity. :)

Cheers
--
Marco
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <435d60c3$0$49766$ed2e19e4@ptn-nntp-reader04.plus.net>
Marco Antoniotti wrote:
> Besides, the little derivative examples quickly showed
> that your OCaml and SML were getting longer and uglier as soon as you
> added the required flexibility, extensibility and expressiveness.  :)

If a function is built-in in one language and not in the other then it will
be shorter in the language where it is built-in, yes.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <Lpr7f.45$pa3.19143@typhoon.nyu.edu>
Jon Harrop wrote:
> Marco Antoniotti wrote:
> 
>>Besides, the little derivative examples quickly showed
>>that your OCaml and SML were getting longer and uglier as soon as you
>>added the required flexibility, extensibility and expressiveness.  :)
> 
> 
> If a function is built-in in one language and not in the other then it will
> be shorter in the language where it is built-in, yes.
> 

Very true.  But if the buil-in function in language A is called 
'differentiate' and 'd' in language B then the program in language B is 
shorter, while that in language A is more readable.  But since 
"readability" is a "quality of life" issue, it does not enter your small 
set of measuring tools.  :)

Cheers
--
Marco
From: Cruise Director
Subject: Re: CL failure stories?
Date: 
Message-ID: <1130231285.118407.49900@z14g2000cwz.googlegroups.com>
Marco Antoniotti wrote:
> Besides, the little derivative examples quickly showed
> that your OCaml and SML were getting longer and uglier as soon as you
> added the required flexibility, extensibility and expressiveness.  :)

I am curious: are you arguing for argument's sake?  Are you merely
trying to convince, say, 3 people who are actually hanging on your
words at this point?  Or do you intend a grander evangelism, and are
merely bogged down in misdirected energy?  None of this talk is going
to prove a damn thing about Lisp or OCaml or SML or whatnot.  I'm
curious what marketing battles you really would like to win, and how
you'd pursue them.  I just can't see the sense in trying to convert a
market of, say, 3 uber-geeks from their favorite abstruse language to
another abstruse language.

Cheers,
Brandon
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <Zmr7f.44$pa3.19722@typhoon.nyu.edu>
Cruise Director wrote:
> Marco Antoniotti wrote:
> 
>>Besides, the little derivative examples quickly showed
>>that your OCaml and SML were getting longer and uglier as soon as you
>>added the required flexibility, extensibility and expressiveness.  :)
> 
> 
> I am curious: are you arguing for argument's sake?  Are you merely
> trying to convince, say, 3 people who are actually hanging on your
> words at this point?  Or do you intend a grander evangelism, and are
> merely bogged down in misdirected energy?  None of this talk is going
> to prove a damn thing about Lisp or OCaml or SML or whatnot.  I'm
> curious what marketing battles you really would like to win, and how
> you'd pursue them.  I just can't see the sense in trying to convert a
> market of, say, 3 uber-geeks from their favorite abstruse language to
> another abstruse language.

Sometimes arguing for argument's sake is the only sane thing to do :) 
It keeps the cogs oiled and the spirits high.  So, yes, in this case I 
guess I am just arguing for argument's sake. :)

Cheers
--
Marco
From: Dr. Thomas Fischbacher
Subject: Re: CL failure stories?
Date: 
Message-ID: <djk4p1$fjk$1@nwrdmz02.dmz.ncs.ea.ibs-infra.bt.com>
Jon Harrop wrote:

>>that was never disputed.
> 
> 
> Actually it was. Thomas Fischbacher tried to dispute, for example.

Actually, Thomas Fischbacher *did* make a very different point (which 
you, by the way, just failed to get).

Your methodology of doing quantitative work is so badly screwed that one 
just cannot consider your "findings" as serious results. I still 
maintain this claim.

So, just to make it explicit once more (not that I would have any hope 
left to finally make you get that point):

(1) Your only-both-faster-and-shorter-improvements-allowed raytracer 
implementation comparison criterion is rubbish in the most fundamental form.

Reason: If one were to adopt such a criterion, it would allow to enter 
the very same system twice into the comparison with different 
implementations, one short-and-slow, one long-and-fast, none of which 
would be bounded by the other. So, you may "validly" (i.e. according to 
the rules of the study) compare X to X and get the result that X is far 
better than X.

Indeed:

http://www.cip.physik.uni-muenchen.de/~tf/raytracer/

(2) People repeatedly did point out major flaws in your arguments and 
views. If you were interested in a honest scientific discussion of the 
issue (and I by now have every reason in the world to believe you are 
not), you would at least have provided pointers to those objections, say 
in the "related links" section of your web page:

http://www.ffconsultancy.com/free/ray_tracer/languages.html

--
Dr. Thomas Fischbacher
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <435e65d5$0$73615$ed2619ec@ptn-nntp-reader03.plus.net>
Thomas, at the end of the day your Lisp implementation only provides more
evidence to support my conclusions. It is faster, yes, but it is also so
much more verbose that it actually fits on the current speed vs verbosity
line for SBCL-compiled Lisp, albeit well off the top of the graph.

Dr. Thomas Fischbacher wrote:
> Reason: If one were to adopt such a criterion, it would allow to enter
> the very same system twice into the comparison with different
> implementations, one short-and-slow, one long-and-fast, none of which
> would be bounded by the other.

This is exactly what I have already done. There are four implementations in
each language, some shorter and slower, others longer and faster.

> So, you may "validly" (i.e. according to 
> the rules of the study) compare X to X and get the result that X is far
> better than X.

There are no rules, just code and measurements summed up in one graph.
People can draw their own conclusions by looking at it.

> (2) People repeatedly did point out major flaws in your arguments and
> views. If you were interested in a honest scientific discussion of the
> issue (and I by now have every reason in the world to believe you are
> not), you would at least have provided pointers to those objections, say
> in the "related links" section of your web page:

Your work received three comments when you posted it on the caml-list, all
of which were negative:

http://caml.inria.fr/pub/ml-archives/caml-list/2005/10/ac501a5b7c46d7c49b84d944807b5d8a.en.html
http://caml.inria.fr/pub/ml-archives/caml-list/2005/10/fef04f45c8347f94ec0475c9923d4352.en.html
http://caml.inria.fr/pub/ml-archives/caml-list/2005/10/7753e880a562bc6cb71376dcf2db1c06.en.html

My work has receieved thousands of responses and has been ported to several
other languages. The vast majority of the responses have been positive.

Ultimately, if you want to substantiate your belief that Lisp can do better
on this benchmark, write some Lisp implementations that lie further down
and to the right.

I believe this benchmark hits a weak point of Lisp. I would like to see a
similar comparison for a benchmark that hits a strong point of Lisp. If you
want to do something constructive, do that.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Cruise Director
Subject: verbosity vs. inscrutability (was Re: CL failure stories?)
Date: 
Message-ID: <1130278533.071296.147350@g49g2000cwa.googlegroups.com>
Jon Harrop wrote:
>
> My work has receieved thousands of responses and has been ported to several
> other languages. The vast majority of the responses have been positive.

Allow me to provide some neutral comments then.  For those coming late
to dinner, we're discussing the "performance vs. verbosity" benchmark
at
http://www.ffconsultancy.com/free/ray_tracer/languages.html

Your metric for "verbosity" is the Line Of Code.  I notice, however,
that the provided Lisp and Scheme examples have far fewer characters
per line, on average, than those of the other languages.  The other
languages read as big, dense, meaty chunks of code.  I think a metric
of "total characters" or "function entry points" might be more
appropriate if you're intending to measure verbosity.  It is true that
Lisp / Scheme code needs more newlines to be readable.  But that by
itself doesn't mean it's more verbose; it just means it's harder to
read.

I'm also not impressed by the range of LOC in these languages.  We're
not talking about orders of magnitude of difference.  Show me a 4x..10x
difference in necessary typing, and yes I'd say one language has an
advantage over another.  Failing that, how much you have to type has to
be weighed against how much you have to *think*.  Do we think more or
less?  Do we spend substantially fewer hours coding?  Or is it really
down to our skill and familiarity with a language.

In that case, what makes one language easier to learn than another?  In
my experience, the quality of the available tools is the primary
determinant.  If you can't even get things working, if they're a bitch
to work with, then you're not going to get into a positive feedback
loop and progress will be slow.

Anyways, it might be more honest if you offered several metrics for
"verbosity," such as LOC, total characters, function entry points, or
whatever else is available in the benchmarking world.  Leaving those
numbers side-by-side without judgement, people could decide for
themselves whether any of them represent "verbosity."  Whereas the
graph, as it stands now, reminds me of a lecture I received in 7th
grade: "How To Lie With Statistics."


Cheers,
Brandon Van Every
From: Greg Lindahl
Subject: Re: verbosity vs. inscrutability (was Re: CL failure stories?)
Date: 
Message-ID: <435eb589$1@news.meer.net>
In article <························@g49g2000cwa.googlegroups.com>,
Cruise Director <···········@gmail.com> wrote:

>Your metric for "verbosity" is the Line Of Code.

I hate measures like this. One long-running debate in the parallel
programming field is whether OpenMP is easier to use than MPI. If you
count lines of code, it's clear that OpenMP is the winner. Ten years
into the debate, people finally started studing the actual *effort* to
produce a parallel program given a serial one, and the very
preliminary result is that they're about the same *effort*.

I'm looking forward to the final results; I've only been waiting a
decade...

-- greg
From: Jon Harrop
Subject: Re: verbosity vs. inscrutability (was Re: CL failure stories?)
Date: 
Message-ID: <435ec8e9$0$73604$ed2619ec@ptn-nntp-reader03.plus.net>
Greg Lindahl wrote:
> In article <························@g49g2000cwa.googlegroups.com>,
> Cruise Director <···········@gmail.com> wrote:
> 
>>Your metric for "verbosity" is the Line Of Code.
> 
> I hate measures like this.

So do I. Unfortunately, I couldn't think of anything better.

I also tried tokens, words and bytes and they all gave basically the same
results. Lisp does relatively much better when counting words because it
contains "words" like "most-positive-double-float" where the other
languages contain much shorter words.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Jon Harrop
Subject: Re: verbosity vs. inscrutability (was Re: CL failure stories?)
Date: 
Message-ID: <435ecb22$0$73604$ed2619ec@ptn-nntp-reader03.plus.net>
Cruise Director wrote:
> I'm also not impressed by the range of LOC in these languages.  We're
> not talking about orders of magnitude of difference.  Show me a 4x..10x
> difference in necessary typing, and yes I'd say one language has an
> advantage over another.

Thomas Fischbacher has written a Lisp implementation for SBCL that has many
more low-level optimisations applied manually. AFAIK, he has been doing
high-performance programming in Lisp for many years, has published research
and can be considered an "advanced" user.

His current implementation is 303LOC and has performance between that of the
55 and 66LOC OCaml implementations on my machine. So it is 5x more verbose
by this measure.

One interesting point that fell out of the benchmark is that, I think,
macros give Lisp a highly non-linear speed vs LOC curve. In the other
languages you must unroll loops and inline functions to remove unnecessary
allocations manually. As Juho Snellman showed so elegantly, you can write
macros to do this work for you, which saves a lot of LOC.

I think that is probably the most interesting result of the benchmark.

> Anyways, it might be more honest if you offered several metrics for
> "verbosity," such as LOC, total characters, function entry points, or
> whatever else is available in the benchmarking world.  Leaving those
> numbers side-by-side without judgement, people could decide for
> themselves whether any of them represent "verbosity."  Whereas the
> graph, as it stands now, reminds me of a lecture I received in 7th
> grade: "How To Lie With Statistics."

I think it is better to simply lay the programs down side-by-side. The
comparison pages do this for C++, SML and OCaml.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Dr. Thomas Fischbacher
Subject: Re: verbosity vs. inscrutability (was Re: CL failure stories?)
Date: 
Message-ID: <djmk7o$6qb$1@nwrdmz03.dmz.ncs.ea.ibs-infra.bt.com>
Dear c.l.l readers,

Jon Harrop wrote:

> Thomas Fischbacher has written a Lisp implementation for SBCL that has many
> more low-level optimisations applied manually. AFAIK, he has been doing
> high-performance programming in Lisp for many years, has published research
> and can be considered an "advanced" user.

See a recent posting to comp.lang.functional for reasons why I do not 
endorse the use of this implementation (which I indeed wrote) for any of 
Jon's purposes.

-- 
Thomas Fischbacher
From: Lars Brinkhoff
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <85y84g667t.fsf@junk.nocrew.org>
Dr. Thomas Fischbacher writes:
> See a recent posting to comp.lang.functional for reasons why I do
> not endorse the use of this implementation (which I indeed wrote)
> for any of Jon's purposes.

I believe this is that posting:

Message-ID: <············@nwrdmz03.dmz.ncs.ea.ibs-infra.bt.com>

http://groups.google.com/group/comp.lang.functional/msg/e48420d41e74f38e
From: Pascal Costanza
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <3s8vqvFmu79pU1@individual.net>
Lars Brinkhoff wrote:
> Dr. Thomas Fischbacher writes:
> 
>>See a recent posting to comp.lang.functional for reasons why I do
>>not endorse the use of this implementation (which I indeed wrote)
>>for any of Jon's purposes.
> 
> I believe this is that posting:
> 
> Message-ID: <············@nwrdmz03.dmz.ncs.ea.ibs-infra.bt.com>
> 
> http://groups.google.com/group/comp.lang.functional/msg/e48420d41e74f38e

Thanks a lot for digging this out. That's an excellent posting!


Pascal

-- 
My website: http://p-cos.net
Closer to MOP & ContextL:
http://common-lisp.net/project/closer/
From: Pascal Bourguignon
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <877jc1yw6t.fsf@thalassa.informatimago.com>
"Cruise Director" <···········@gmail.com> writes:

> Jon Harrop wrote:
>>
>> My work has receieved thousands of responses and has been ported to several
>> other languages. The vast majority of the responses have been positive.
>
> Allow me to provide some neutral comments then.  For those coming late
> to dinner, we're discussing the "performance vs. verbosity" benchmark
> at
> http://www.ffconsultancy.com/free/ray_tracer/languages.html
>
> Your metric for "verbosity" is the Line Of Code.  I notice, however,
> that the provided Lisp and Scheme examples have far fewer characters
> per line, on average, than those of the other languages.  The other
> languages read as big, dense, meaty chunks of code.  I think a metric
> of "total characters" or "function entry points" might be more
> appropriate if you're intending to measure verbosity.  It is true that
> Lisp / Scheme code needs more newlines to be readable.  But that by
> itself doesn't mean it's more verbose; it just means it's harder to
> read.

The first thing to do to show a minimum of honesty in comparing
verbosity, is to compress the sources (cf Chaitin's works if you don't
see why).

$ bzip2 -9 ?/*
$ for f in $(cd 1 ; echo *) ; do \
    for d in ? ; do printf "%6d" $(stat -c %s $d/$f) ; done ;\
    printf "  %s\n" $f; done

  1418  1476  1514  1564  ray.cpp.bz2
  1420  1529  1587  1614  ray.java.bz2
  1409  1552  1628  1653  ray.lisp.bz2
  1101  1174  1219  1247  ray.ml.bz2
  1214  1310  1366  1433  ray.sc.bz2
  1206  1297  1331  1366  ray.sml.bz2

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

In a World without Walls and Fences, 
who needs Windows and Gates?
From: Jon Harrop
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <435ecb8f$0$73604$ed2619ec@ptn-nntp-reader03.plus.net>
Pascal Bourguignon wrote:
> The first thing to do to show a minimum of honesty in comparing
> verbosity, is to compress the sources

Absolutely not! By compressing the data you are removing the redundancy. We
are trying to measure the redundancy.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: ·········@enfocus.be
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <1130316138.345440.47950@g43g2000cwa.googlegroups.com>
Wouldn't you have to divide size / compressed-size then?
Immanuel
From: Jon Harrop
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <435f8d56$0$6500$ed2619ec@ptn-nntp-reader01.plus.net>
·········@enfocus.be wrote:
> Wouldn't you have to divide size / compressed-size then?
> Immanuel

Yes, that is a viable suggestion. However, the amount of code written has
more practical relevance so people prefer to just look at it directly.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Pascal Bourguignon
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <873bmpyuzd.fsf@thalassa.informatimago.com>
Jon Harrop <······@jdh30.plus.com> writes:

> Pascal Bourguignon wrote:
>> The first thing to do to show a minimum of honesty in comparing
>> verbosity, is to compress the sources
>
> Absolutely not! By compressing the data you are removing the redundancy. We
> are trying to measure the redundancy.

Perhaps.  Anyways, have a look at: http://cs.umaine.edu/~chaitin/lisp.html


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
You never feed me.
Perhaps I'll sleep on your face.
That will sure show you.
From: Jon Harrop
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <435edaad$0$73604$ed2619ec@ptn-nntp-reader03.plus.net>
Pascal Bourguignon wrote:
> Jon Harrop <······@jdh30.plus.com> writes:
>> Absolutely not! By compressing the data you are removing the redundancy.
>> We are trying to measure the redundancy.
> 
> Perhaps.  Anyways, have a look at: http://cs.umaine.edu/~chaitin/lisp.html

Cool. :-)

I think it is still interesting to look at the shortest and fastest
implementations that people can write.

If I could spend more time benchmarking languages then I'd come up with a
variety of different tasks designed to show the strengths of different
types of language, let people implement them in different languages,
measure the bytes of source and times taken to solve randomly generated
tasks (verifying the output against a reference implementation). Then I'd
plot the speed vs verbosity and calculate best fits for the different
languages.

That's a lot of work though...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Cruise Director
Subject: Re: verbosity vs. inscrutability
Date: 
Message-ID: <1130302404.184384.223270@f14g2000cwb.googlegroups.com>
Jon Harrop wrote:
>
> If I could spend more time benchmarking languages then I'd come up with a
> variety of different tasks designed to show the strengths of different
> types of language, let people implement them in different languages,
> measure the bytes of source and times taken to solve randomly generated
> tasks (verifying the output against a reference implementation). Then I'd
> plot the speed vs verbosity and calculate best fits for the different
> languages.

Indeed, we had these debates back when I was getting involved with the
Revived Great Language Shootout, back in my OCaml advocacy days.  Then
I realized we did not share marketing or pedagogical objectives, and
that it was a waste of (at least my) time.  Different people want to
weight things differently, and when people are insistent on that, in
the limit everyone has to do their own research.  I've done it
professionally with OpenGL benchmarks, so I know all about how
benchmarks drive the things that people bother to observe.


Cheers,
Brandon
From: jayessay
Subject: Re: CL failure stories?
Date: 
Message-ID: <m3fyqqtgqf.fsf@rigel.goldenthreadtech.com>
Jon Harrop <······@jdh30.plus.com> writes:

> Marco Antoniotti wrote:
> > The point is that the optimum w.r.t. conflicting goals is achieved by CL
> > and not, sorry to point this out, by SML, OCaml et similia, despite the
> > very useful static type checking bit.
> 
> The available quantitative evidence clearly contradicts this:
> 
>   http://www.ffconsultancy.com/free/ray_tracer/languages.html

In one sense it's hard to believe that you would consider this garbage
serious scientific evidence of anything.  In another sense, it's easy
to believe you would and simply sad that you do...


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Lars Brinkhoff
Subject: Re: CL failure stories?
Date: 
Message-ID: <85fyqwa1ni.fsf@junk.nocrew.org>
Pisin Bootvong wrote:
> Jon Harrop wrote:
> > d (Num n) x = Num 0
> > d (Var y) x | x==y = Num 1
> >             | x/=y = Num 0
> > d (Plus e1 e) x = Plus (d e1 x) (d e2 x)
> > d (Times e1 e2) x = Plus (Times e1 (d e2 x)) (Times (d e1 x) e2)
>
> (defmethod d ((n number) var) 0)
> (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> (defmethod d ((expr list) var)
>   (destructuring-bind (op e1 e2) expr
>     (case op
>       (+ `(+ (d e1 var) (d e2 var)))
>       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))

And, although this is not standard CL, and is untested, and may be
totally incorrect, perhaps someone would like to chime in with a
shameless plug along the lines of:

(defun d (expr var)
  (match-case expr
    (#T(number _)       0)
    (#T(symbol ?sym)    (if (eq ?sym var) 1 0))
    (#T(list + ?e1 ?e2) `(+ ,(d ?e1 var) ,(d ?e2 var)))
    (#T(list * ?e1 ?e2) `(* #|...|#))))
From: Thomas F. Burdick
Subject: Re: CL failure stories?
Date: 
Message-ID: <xcv3bmwe7j5.fsf@conquest.OCF.Berkeley.EDU>
Lars Brinkhoff <·········@nocrew.org> writes:

> And, although this is not standard CL, and is untested, and may be
> totally incorrect, perhaps someone would like to chime in with a
> shameless plug along the lines of:
> 
> (defun d (expr var)
>   (match-case expr
>     (#T(number _)       0)
>     (#T(symbol ?sym)    (if (eq ?sym var) 1 0))
>     (#T(list + ?e1 ?e2) `(+ ,(d ?e1 var) ,(d ?e2 var)))
>     (#T(list * ?e1 ?e2) `(* #|...|#))))

or even this, which is standard CL:

  (defun d (expr var)
    (typecase expr
      (number 0)
      (symbol (if (eq expr var) 1 0))
      ((cons (eql +) list) `(+ ,(d (cadr expr) var) ,(d (caddr expr) var)))
      ((cons (eql *) list) `(* ...))))

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | Free Mumia Abu-Jamal! |
     ,--'    _,'   | Abolish the racist    |
    /       /      | death penalty!        |
   (   -.  |       `-----------------------'
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Richard J. Fateman
Subject: Re: CL failure stories? small diff
Date: 
Message-ID: <435817CD.5060805@eecs.berkeley.edu>
see
http://www.cs.berkeley.edu/~fateman/papers/deriv.pdf
for a discussion of short programs for diff in lisp.
For not much more lines of code you can differentiate
+  and *  of arbitrary numbers of args, as well as sin/cos...

Mr. Harrop's claim about simplification is mostly pointless.
Many simplification programs have been written in lisp.
Some have been written in C for Mathematica. If you want
to use built in Mathematica facilities, then you can write
D[expr,x]    and just use the built-in
differentiation program.

There is another view of differentiation which is
"algorithm differentiation".  See www.autodiff.com
to see how people have done something much more useful,
generally.  But with much greater difficulty than needed. They
are trying to, for example, change a C function f(x)
into another C function which returns both f(x) and f'(x).
The nice part is that this doesn't require simplifying
symbolic expressions.
A program to do this kind of thing can be
done from lisp to lisp, writing in lisp, very neatly.


Oh, I think that Pisin's program can be made into 6
lines, by combining the first two into
(defmethod d( x var) (if eql x var) 1 0))
   ;; everything that is not a list will be treated here..

RJF



Pisin Bootvong wrote:
.... snip

> 
> (defmethod d ((n number) var) 0)
> (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> (defmethod d ((expr list) var)
>   (destructuring-bind (op e1 e2) expr
>     (case op
>       (+ `(+ (d e1 var) (d e2 var)))
>       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
> 
> That's 7 line of code.
>
From: Jon Harrop
Subject: Re: CL failure stories? small diff
Date: 
Message-ID: <4359b345$0$6509$ed2619ec@ptn-nntp-reader01.plus.net>
Richard J. Fateman wrote:
> Mr. Harrop's claim about simplification is mostly pointless.

There are two points:

Firstly, you can't claim that using eval in a Lisp implementation is great
because eval is built in and then complain when I suggest using Simplify in
a Mathematica implementation because it is built in. Either you allow both
or disallow both.

Secondly, the deriv function only looks one level into the expression so
pattern matching is only a slight advantage. The simplify function will
need to look deeper, so perhaps it would be more interesting to compare
simplify functions written in different languages.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <2x66f.35$pa3.18090@typhoon.nyu.edu>
Why don't I read the posting of other people before answering?

Thanks for your nice reply!

Cheers
--
Marco




Pisin Bootvong wrote:
> Jon Harrop เขียน:
> 
>>Alan Crowe wrote:
>>
>>>"Mark Tarver" <··········@ukonline.co.uk> writes:
>>>
>>>>The battle for Lisp is really lost in
>>>>education.
>>>
>>>See me fighting for Lisp in education on slashdot
>>>
>>>http://slashdot.org/comments.pl?sid=158755&cid=13304138
>>
>>Here's your Lisp from that page:
>>
>>(defparameter plus '+)
>>(defparameter times '*)
>>(defun differentiate (form variable)
>>    (typecase form
>>          (number 0)
>>          (symbol (if (eql form variable) 1 0))
>>          (list (case (first form)
>>              (+ (list plus
>>                   (differentiate (second form) variable)
>>                   (differentiate (third form) variable)))
>>              (* (list plus
>>                   (list times
>>                     (differentiate (second form) variable)
>>                     (third form))
>>                   (list times
>>                     (second form)
>>                     (differentiate (third form) variable))))))))
>>
>>* (differentiate '(* x (* x x)) 'x)
>>
>>(+ (* 1 (* X X)) (* X (+ (* 1 X) (* X 1))))
>>
>>Here are the five equivalent rewrite rules with an extra one to handle
>>powers, written in Mathematica:
>>
>>In[1]:= rules = {d[_?NumericQ, _] :> 0,
>>                 d[x_, x_]        :> 1,
>>                 d[_Symbol, _]    :> 0,
>>                 d[u_ + v_, x_]   :> d[u, x] + d[v, x],
>>                 d[u_ v_, x_]     :> u d[v, x] + v d[u, x],
>>                 d[u_ ^ v_, x_]   :> u^v (v d[u, x]/u + d[v, x] Log[u])};
>>
>>In[2]:= d[x^3, x] //. rules
>>           2
>>Out[2]= 3 x
>>
>>Here is the equivalent OCaml:
>>
>># let rec d : 'expr * string -> 'expr = function
>>    `Num _, _          -> `Num 0
>>  | `Var v, x when v=x -> `Num 1
>>  | `Var _, _          -> `Num 0
>>  | `Plus(e1, e2), x   -> `Plus(d(e1, x), d(e2, x))
>>  | `Times(e1, e2), x  -> `Plus(`Times(e1, d(e2, x)),
>>                                `Times(e2, d(e1, x)));;
>>val d :
>>  ([< `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var of string
>>    > `Num `Plus `Times ] as 'a) * string -> 'a = <fun>
>>
>># d(`Times(`Var "x", `Times(`Var "x", `Var "x")), "x");;
>>- : [ `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var ofstring ]
>>    as 'a
>>=
>>`Plus
>>  (`Times
>>     (`Var "x", `Plus (`Times (`Var "x", `Num 1),
>>                       `Times (`Var "x", `Num 1))),
>>   `Times (`Times (`Var "x", `Var "x"), `Num 1))
>>
>>Here is the equivalent SML:
>>
>>- datatype expr =
>>      Num of int
>>    | Var of string
>>    | Plus of expr * expr
>>    | Times of expr * expr;
>>
>>>New type names: =expr
>>
>>  datatype expr =
>>  (expr,
>>   {con Num : int -> expr,
>>    con Plus : expr * expr -> expr,
>>    con Times : expr * expr -> expr,
>>    con Var : string -> expr})
>>  con Num = fn : int -> expr
>>  con Plus = fn : expr * expr -> expr
>>  con Times = fn : expr * expr -> expr
>>  con Var = fn : string -> expr
>>
>>- fun d(Num _, _)         = Num 0
>>    | d(Var v, x)         = Num (if v=x then 1 else 0)
>>    | d(Plus(e1, e2), x)  = Plus(d(e1, x), d(e2, x))
>>    | d(Times(e1, e2), x) = Plus(Times(e1, d(e2, x)), Times(e2, d(e1, x)));
>>
>>>val d = fn : expr * string -> expr
>>
>>- d(Times(Var "x", Times(Var "x", Var "x")), "x");;
>>
>>>val it =
>>
>>    Plus(Times(Var "x", Plus(Times(Var "x", Num 1), Times(Var "x", Num1))),
>>         Times(Times(Var "x", Var "x"), Num 1)) : expr
>>
>>Here is the equivalent Haskell:
>>
>>data Expr =
>>      Num Int
>>    | Var String
>>    | Add Expr Expr
>>    | Mul Expr Expr
>>
>>d (Num n) x = Num 0
>>d (Var y) x | x==y = Num 1
>>            | x/=y = Num 0
>>d (Plus e1 e) x = Plus (d e1 x) (d e2 x)
>>d (Times e1 e2) x = Plus (Times e1 (d e2 x)) (Times (d e1 x) e2)
>>
>>d (Times (Var "x") (Times (Var "x") (Var "x"))) "x"
>>
>>Given that all of these implementations are shorter and clearer than your
>>Lisp, can you please explain why you are advocating Lisp for this task?
>>
>>--
>>Dr Jon D Harrop, Flying Frog Consultancy
>>http://www.ffconsultancy.com
> 
> 
> By:
>  1.) Reduce meaningful variable names ('variable', 'form') to ('v',
> 'expr'), which could be either good or bad depending on the context.
>  2.) Use generic function to make pattern matching clearer.
>  3.) Make method returning trivial expression a one-liner. It doesn't
> break the style too much and still is readable.
>  3.) Use Backquote for list construction, so that the result return
> form looks clearer.
>  4.) Optionally, Use '+ and '* as symbol directly without defparameter.
> It's possible to put quoted '+' in returned expression and IMO it is
> clearer that way. Mathematica version also does this.
> 
> (defmethod d ((n number) var) 0)
> (defmethod d ((x symbol) var) (if (eql x var) 1 0))
> (defmethod d ((expr list) var)
>   (destructuring-bind (op e1 e2) expr
>     (case op
>       (+ `(+ (d e1 var) (d e2 var)))
>       (* `(+ (* ,(d e1 var) ,e2) (* ,e1 ,(d e2 var)))))))
> 
> That's 7 line of code.
> 
> Out of these languages, only Lisp seems to allow you to directly
> evaluate the derivative's result. I don't know about Mathematica, but
> for other solution it doens't seem so.
> 
> You can do this in Lisp:
> 
> CL-USER> (d '(* x x) 'x)
>  ==> (+ (* 1 X) (* X 1))
> CL-USER> (eval `(let ((x 2)) ,*))
>  ==> 4
> 
> I think there is also eval in Haskell and others, but I would like to
> see how the code will change regarding the added requirement.
> 
From: Alan Crowe
Subject: Re: CL failure stories?
Date: 
Message-ID: <86u0fcfnl6.fsf@cawtech.freeserve.co.uk>
Jon Harrop <······@jdh30.plus.com> writes:
> Here's your Lisp from that page:

Some of it. Have a think about what got left behind. It is
still at

http://slashdot.org/comments.pl?sid=158755&cid=13304138

> Given that all of these implementations are shorter and clearer than your
> Lisp, can you please explain why you are advocating Lisp for this task?
                                                               ^^^^

You are pulling the ambiguous pronoun trick on me. What does
"this" refer to? My post was advocating something
interesting. With your cunning use of language you have
stomped all over the interesting points, reducing "the task"
to "write slick code for symbolic differentiation".

Try this as a definition of the task:

1)Help students internalise the fact that commutation of the diagram


               symbolic manipulation
    form ------------------------------> form
     |                                     |
     | numerical computation               | numerical computation
     |                                     |
     V                                     V
    value ============================== value
               watch out for
             inexact arithmetic

determines which symbolic manipulations are valid.

Alan Crowe
Edinburgh
Scotland
From: Pascal Bourguignon
Subject: Re: CL failure stories?
Date: 
Message-ID: <87r7ag3v5c.fsf@thalassa.informatimago.com>
Jon Harrop <······@jdh30.plus.com> writes:

> Alan Crowe wrote:
>> "Mark Tarver" <··········@ukonline.co.uk> writes:
>>> The battle for Lisp is really lost in
>>> education.
>> 
>> See me fighting for Lisp in education on slashdot
>> 
>> http://slashdot.org/comments.pl?sid=158755&cid=13304138
>
> Here's your Lisp from that page:
>
> (defparameter plus '+)
> (defparameter times '*)
> (defun differentiate (form variable)
>     (typecase form
>           (number 0)
>           (symbol (if (eql form variable) 1 0))
>           (list (case (first form)
>               (+ (list plus
>                    (differentiate (second form) variable)
>                    (differentiate (third form) variable)))
>               (* (list plus
>                    (list times
>                      (differentiate (second form) variable)
>                      (third form))
>                    (list times
>                      (second form)
>                      (differentiate (third form) variable))))))))
>
> * (differentiate '(* x (* x x)) 'x)
>
> (+ (* 1 (* X X)) (* X (+ (* 1 X) (* X 1))))
>
> Here are the five equivalent rewrite rules with an extra one to handle
> powers, written in Mathematica:
> [...some line noise...]
> Given that all of these implementations are shorter and clearer than your
> Lisp, can you please explain why you are advocating Lisp for this task?

I can't say, I've been doing lisp intensively for 5 years now, the rest looks
to me like line noise.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

The world will now reboot.  don't bother saving your artefacts.
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <1v66f.34$pa3.18090@typhoon.nyu.edu>
Jon Harrop wrote:
> Alan Crowe wrote:
> 
>>"Mark Tarver" <··········@ukonline.co.uk> writes:
>>
>>>The battle for Lisp is really lost in
>>>education.
>>
>>See me fighting for Lisp in education on slashdot
>>
>>http://slashdot.org/comments.pl?sid=158755&cid=13304138
> 
> 
> Here's your Lisp from that page:
> 
> (defparameter plus '+)
> (defparameter times '*)
> (defun differentiate (form variable)
>     (typecase form
>           (number 0)
>           (symbol (if (eql form variable) 1 0))
>           (list (case (first form)
>               (+ (list plus
>                    (differentiate (second form) variable)
>                    (differentiate (third form) variable)))
>               (* (list plus
>                    (list times
>                      (differentiate (second form) variable)
>                      (third form))
>                    (list times
>                      (second form)
>                      (differentiate (third form) variable))))))))
> 
> * (differentiate '(* x (* x x)) 'x)
> 
> (+ (* 1 (* X X)) (* X (+ (* 1 X) (* X 1))))
> 
> Here are the five equivalent rewrite rules with an extra one to handle
> powers, written in Mathematica:
> 
> In[1]:= rules = {d[_?NumericQ, _] :> 0,
>                  d[x_, x_]        :> 1,
>                  d[_Symbol, _]    :> 0, 
>                  d[u_ + v_, x_]   :> d[u, x] + d[v, x], 
>                  d[u_ v_, x_]     :> u d[v, x] + v d[u, x],
>                  d[u_ ^ v_, x_]   :> u^v (v d[u, x]/u + d[v, x] Log[u])};
> 
> In[2]:= d[x^3, x] //. rules
>            2
> Out[2]= 3 x
> 
> Here is the equivalent OCaml:
> 
> # let rec d : 'expr * string -> 'expr = function
>     `Num _, _          -> `Num 0
>   | `Var v, x when v=x -> `Num 1
>   | `Var _, _          -> `Num 0
>   | `Plus(e1, e2), x   -> `Plus(d(e1, x), d(e2, x))
>   | `Times(e1, e2), x  -> `Plus(`Times(e1, d(e2, x)),
>                                 `Times(e2, d(e1, x)));; 
> val d :
>   ([< `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var of string
>     > `Num `Plus `Times ] as 'a) * string -> 'a = <fun>
> 
> # d(`Times(`Var "x", `Times(`Var "x", `Var "x")), "x");;
> - : [ `Num of int | `Plus of 'a * 'a | `Times of 'a * 'a | `Var ofstring ]
>     as 'a
> =
> `Plus
>   (`Times
>      (`Var "x", `Plus (`Times (`Var "x", `Num 1),
>                        `Times (`Var "x", `Num 1))),
>    `Times (`Times (`Var "x", `Var "x"), `Num 1))
> 
> Here is the equivalent SML:
> 
> - datatype expr =
>       Num of int
>     | Var of string
>     | Plus of expr * expr
>     | Times of expr * expr;
> 
>>New type names: =expr
> 
>   datatype expr =
>   (expr,
>    {con Num : int -> expr,
>     con Plus : expr * expr -> expr,
>     con Times : expr * expr -> expr,
>     con Var : string -> expr})
>   con Num = fn : int -> expr
>   con Plus = fn : expr * expr -> expr
>   con Times = fn : expr * expr -> expr
>   con Var = fn : string -> expr
> 
> - fun d(Num _, _)         = Num 0
>     | d(Var v, x)         = Num (if v=x then 1 else 0)
>     | d(Plus(e1, e2), x)  = Plus(d(e1, x), d(e2, x))
>     | d(Times(e1, e2), x) = Plus(Times(e1, d(e2, x)), Times(e2, d(e1, x)));
> 
>>val d = fn : expr * string -> expr
> 
> 
> - d(Times(Var "x", Times(Var "x", Var "x")), "x");;
> 
>>val it =
> 
>     Plus(Times(Var "x", Plus(Times(Var "x", Num 1), Times(Var "x", Num1))),
>          Times(Times(Var "x", Var "x"), Num 1)) : expr
> 
> Here is the equivalent Haskell:
> 
> data Expr =
>       Num Int
>     | Var String
>     | Add Expr Expr
>     | Mul Expr Expr
> 
> d (Num n) x = Num 0
> d (Var y) x | x==y = Num 1
>             | x/=y = Num 0
> d (Plus e1 e) x = Plus (d e1 x) (d e2 x)
> d (Times e1 e2) x = Plus (Times e1 (d e2 x)) (Times (d e1 x) e2)
> 
> d (Times (Var "x") (Times (Var "x") (Var "x"))) "x"
> 
> Given that all of these implementations are shorter and clearer than your
> Lisp, can you please explain why you are advocating Lisp for this task?
> 

Becuse (1) shortness is the hobgoblin of ... and (2) your Lisp version 
is neither "optimal" nor "elegant" in terms of flexibility and clarity. 
  Plus, the OCaml and SML versions are actually longer and much more 
involved than the CL one.

Cheers
--
Marco
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <4359aea3$0$6509$ed2619ec@ptn-nntp-reader01.plus.net>
Marco Antoniotti wrote:
> Becuse (1) shortness is the hobgoblin of ... and (2) your Lisp version 
> is neither "optimal" nor "elegant" in terms of flexibility and clarity.

Alan Crowe wrote the Lisp, not me.

>   Plus, the OCaml and SML versions are actually longer and much more
> involved than the CL one.

How is this longer and more involved:

let rec d = function
    `Num _, _               -> `Num 0.
  | `Var v, x               -> `Num (if v=x then 1. else 0.)
  | `Op2(`Plus, e1, e2), x  -> `Op2(`Plus, d(e1, x), d(e2, x))
  | `Op2(`Times, e1, e2), x -> `Op2(`Plus, `Op2(`Times, e1, d(e2, x)),
                                           `Op2(`Times, e2, d(e1, x)))

than this:

(defparameter plus '+)
(defparameter times '*)
(defun differentiate (form variable)
    (typecase form
          (number 0)
          (symbol (if (eql form variable) 1 0))
          (list (case (first form)
              (+ (list plus
                   (differentiate (second form) variable)
                   (differentiate (third form) variable)))
              (* (list plus
                   (list times
                     (differentiate (second form) variable)
                     (third form))
                   (list times
                     (second form)
                     (differentiate (third form) variable))))))))

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Marco Antoniotti
Subject: Re: CL failure stories?
Date: 
Message-ID: <bQ57f.39$pa3.19149@typhoon.nyu.edu>
Jon Harrop wrote:
> Marco Antoniotti wrote:
> 
>>Becuse (1) shortness is the hobgoblin of ... and (2) your Lisp version 
>>is neither "optimal" nor "elegant" in terms of flexibility and clarity.
> 
> 
> Alan Crowe wrote the Lisp, not me.
> 
> 

But it is you who writes "short" replies :)


>>  Plus, the OCaml and SML versions are actually longer and much more
>>involved than the CL one.
> 
> 
> How is this longer and more involved:

It is not longer, but it is more involved and it does not take into 
account the shorter (and more elegant, *and* more simply extensible) CL 
version.

Cheers
--
Marco





> 
> let rec d = function
>     `Num _, _               -> `Num 0.
>   | `Var v, x               -> `Num (if v=x then 1. else 0.)
>   | `Op2(`Plus, e1, e2), x  -> `Op2(`Plus, d(e1, x), d(e2, x))
>   | `Op2(`Times, e1, e2), x -> `Op2(`Plus, `Op2(`Times, e1, d(e2, x)),
>                                            `Op2(`Times, e2, d(e1, x)))
> 
> than this:
> 
> (defparameter plus '+)
> (defparameter times '*)
> (defun differentiate (form variable)
>     (typecase form
>           (number 0)
>           (symbol (if (eql form variable) 1 0))
>           (list (case (first form)
>               (+ (list plus
>                    (differentiate (second form) variable)
>                    (differentiate (third form) variable)))
>               (* (list plus
>                    (list times
>                      (differentiate (second form) variable)
>                      (third form))
>                    (list times
>                      (second form)
>                      (differentiate (third form) variable))))))))
> 
From: Geoffrey Summerhayes
Subject: Re: CL failure stories?
Date: 
Message-ID: <a8w5f.10956$ns3.818222@news20.bellglobal.com>
"Mark Tarver" <··········@ukonline.co.uk> wrote in message 
····························@g43g2000cwa.googlegroups.com...
> One of the biggest failures of Lisp in the UK is to
> make it as the language of choice for teaching
> UGs functional programming.

I can't really see this as a failing of Lisp, it's like complaining
that a Swiss army knife is defective because it does more than just
whittle.

> There are several reasons why.  One reason is that
> people can more easily write horrible looking procedural programs
> in Lisp than in any other FPL.  The language abounds
> in procedural constructions, GO, FOR, LOOP and
> expressions for stuffing values into places.  All useful
> in certain contexts no doubt, but open to abuse.  I
> see people writing FOR loops instead of using tail recursion.
> Students who lack discipline can produce some horrendous
> stuff in CL.  Self-taught programmers can fail to learn
> to really understand things like data abstraction in CL.

Lisp is *not* a functional language, it *is* a language that
allows people to write functional programs as well as other
paradigms. If you want to force FP you have to restrict what
parts of the language the programmers are allowed to use and
how and then enforce it.

> ML and related languages do enforce more discipline on
> the novice.  Later they can irritate when you want to let
> go of mummy's hand and CL becomes attractive.  At
> the beginning though, its good discipline.
>
> The other thing that kills Lisp as a UG language is the
> syntax. Simple in BNF terms sure, but lacking pattern-directed
> programming and unlike Prolog, the symbols need quoting -
> people regard it as old fashioned.  Type checking comes
> up as an issue too.

Right, how's this for 'unquoted' Prolog?

programming_language(FORTRAN).
programming_language(Lisp).
programming_language(Prolog).
programming_language(sml).

Notice anything amiss?

> There's lots of positive things I could say, but the header says
> "CL failure stories" and this is the biggest.  It has knock-on
> effect on the whole Lisp industry.  No UG Lisp hackers means
> no domestic Lisp industry (no people for the jobs, so employers
> dont put out the jobs).  The battle for Lisp is really lost in
> education.
>

I bought a fridge because I needed ice cubes but the wife keeps
throwing food in it and the damn thing is way too big when you
consider the size of the ice cubes it pumps out. It's like that
bit in 'Back To The Future Part III'.

--
Geoff
From: Juliusz Chroboczek
Subject: Re: CL failure stories?
Date: 
Message-ID: <7iu0fbsthr.fsf@lanthane.pps.jussieu.fr>
"Mark Tarver" <··········@ukonline.co.uk>:

> One of the biggest failures of Lisp [...] is to make it as the
> language of choice for teaching [undergratuate students] functional
> programming.

> There are several reasons why.

Spot on, but I'd like to expand on that.

You have to know what a programming course is like.  Where I live,
it's usually one lecturer and 2 to 5 groups for practicals.

During a practical, you find yourself with 30 students on 30 machines
writing a program that is calibrated to be just a little bit too
difficult for most of them.  There are some quiet moments, but most of
the time you've got a queue of five raised hands.

And the most common question is ``it doesn't work''.  At which point
you've got to help the student debug his code, and be speedy enough to
prevent the other students from getting frustrated and thinking about
something else.  A fascist compiler really helps.

I use Lisp for most of the programming I do myself -- but I'd never
dream of using it for any non-trivial programming projects with
undergrads.  The only way to change that is to double the teaching
staff -- but is society willing to foot the bill?

                                        Juliusz
From: David Steuber
Subject: Re: CL failure stories?
Date: 
Message-ID: <874q7a7r1s.fsf@david-steuber.com>
Juliusz Chroboczek <···@pps.jussieu.fr> writes:

> I use Lisp for most of the programming I do myself -- but I'd never
> dream of using it for any non-trivial programming projects with
> undergrads.  The only way to change that is to double the teaching
> staff -- but is society willing to foot the bill?

Isn't knowing the language and knowing how to program two different
things?  For example, I know C pretty well.  That doesn't necessarily
mean that I can create the next great window manager for X11.

-- 
http://www.david-steuber.com/
The UnBlog: An island of conformity in a sea of quirks.
The lowest click through rate in Google's AdSense program.
----------------------------------------------------------------------
From: Juliusz Chroboczek
Subject: Re: CL failure stories?
Date: 
Message-ID: <7ihdb59udk.fsf@lanthane.pps.jussieu.fr>
Me:

>> I use Lisp for most of the programming I do myself -- but I'd never
>> dream of using it for any non-trivial programming projects with
>> undergrads.  The only way to change that is to double the teaching
>> staff -- but is society willing to foot the bill?

David Steuber <·····@david-steuber.com>:

> Isn't knowing the language and knowing how to program two different
> things?  For example, I know C pretty well.  That doesn't necessarily
> mean that I can create the next great window manager for X11.

I'm not sure I see your point here.  I was trying to explain why we
don't use Lisp in teaching, but instead use languages with more
fascist compilers such as ML or Java.

The question of whether we manage to actually teach programming or
merely teach programming languages is interesting, but completely
off-topic in this discussion.

                                        Juliusz
From: David Steuber
Subject: Re: CL failure stories?
Date: 
Message-ID: <87d5ltyll4.fsf@david-steuber.com>
Juliusz Chroboczek <···@pps.jussieu.fr> writes:

> Me:
> 
> >> I use Lisp for most of the programming I do myself -- but I'd never
> >> dream of using it for any non-trivial programming projects with
> >> undergrads.  The only way to change that is to double the teaching
> >> staff -- but is society willing to foot the bill?
> 
> David Steuber <·····@david-steuber.com>:
> 
> > Isn't knowing the language and knowing how to program two different
> > things?  For example, I know C pretty well.  That doesn't necessarily
> > mean that I can create the next great window manager for X11.
> 
> I'm not sure I see your point here.  I was trying to explain why we
> don't use Lisp in teaching, but instead use languages with more
> fascist compilers such as ML or Java.
> 
> The question of whether we manage to actually teach programming or
> merely teach programming languages is interesting, but completely
> off-topic in this discussion.

I guess I missed your point then.

How do more fascist compilers help with undergrads doing non-trivial
programming projects?  How is that not the same as teaching
programming?

-- 
http://www.david-steuber.com/
The UnBlog: An island of conformity in a sea of quirks.
The lowest click through rate in Google's AdSense program.
----------------------------------------------------------------------
From: Ulrich Hobelmann
Subject: Re: CL failure stories?
Date: 
Message-ID: <3s8iqaFn5rk3U3@individual.net>
David Steuber wrote:
> How do more fascist compilers help with undergrads doing non-trivial
> programming projects?  How is that not the same as teaching
> programming?

Sometime ago I stumbled over an introductory class using I believe 
Haskell.  The disadvantage is that it won't let you execute anything 
that's not correctly typed.  The advantage in a programming class is 
that it forces students to think about types (and thus program 
structure) instead of just typing some expressions in a try-and-repeat 
loop.  Later on, for actual programming, Lisp might be as well suited, 
but the beginning discipline doesn't hurt, IMHO.

-- 
Blessed are the young for they shall inherit the national debt.
	Herbert Hoover
From: Jon Harrop
Subject: Re: CL failure stories?
Date: 
Message-ID: <435f2a75$0$73588$ed2619ec@ptn-nntp-reader03.plus.net>
Ulrich Hobelmann wrote:
> Sometime ago I stumbled over an introductory class using I believe
> Haskell.  The disadvantage is that it won't let you execute anything
> that's not correctly typed.  The advantage in a programming class is
> that it forces students to think about types (and thus program
> structure) instead of just typing some expressions in a try-and-repeat
> loop.  Later on, for actual programming, Lisp might be as well suited,
> but the beginning discipline doesn't hurt, IMHO.

Perhaps I have grown accustomed to having the compiler pick up obvious
errors and force me to correct them.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
From: Robert Strandh
Subject: Re: CL failure stories?
Date: 
Message-ID: <6wvezk85qk.fsf@serveur5.labri.fr>
Ulrich Hobelmann <···········@web.de> writes:

> Sometime ago I stumbled over an introductory class using I believe
> Haskell.  The disadvantage is that it won't let you execute anything
> that's not correctly typed.  The advantage in a programming class is
> that it forces students to think about types (and thus program
> structure) instead of just typing some expressions in a try-and-repeat
> loop.  Later on, for actual programming, Lisp might be as well suited,
> but the beginning discipline doesn't hurt, IMHO.

While that may seem obvious to you, it is not to me.  Try substituting
"indentation" for "typing", and you get something like: 

= Sometime ago I stumbled over an introductory class using I believe
= Python.  The disadvantage is that it won't let you execute anything
= that's not correctly indented.  The advantage in a programming class is
= that it forces students to think about indentation (and thus program
= structure) instead of just typing some expressions in a try-and-repeat
= loop.  Later on, for actual programming, Lisp might be as well suited,
= but the beginning discipline doesn't hurt, IMHO.

Looks reasonable? 

Now try substituting "using long variable names" or "using variable
names that exist in a dictionary" or just about any other
restriction, and you will see that it is not trivially true that "the
beginning discipline doesn't hurt".  

For the "discipline" not to "hurt" the restriction must be widely
agreed upon to be a good one.  And we still don't agree with respect
to static typing.

Some of my colleagues often fall into similar traps by thinking that
forcing students to be disciplined is automatically good.  Thus we
have had aberrations in the past such as "desk compiling" code even
though compilers are readily available, or suggesting everyone should
use punch cards in the beginning, or forcing students not to type a
single line of code until the design is done. 

-- 
Robert Strandh
From: ······@earthlink.net
Subject: Re: CL failure stories?
Date: 
Message-ID: <1130358595.573494.66140@f14g2000cwb.googlegroups.com>
Robert Strandh wrote:
> = Sometime ago I stumbled over an introductory class using I believe
> = Python.  The disadvantage is that it won't let you execute anything
> = that's not correctly indented.  The advantage in a programming class is
> = that it forces students to think about indentation (and thus program
> = structure) instead of just typing some expressions in a try-and-repeat
> = loop.  Later on, for actual programming, Lisp might be as well suited,
> = but the beginning discipline doesn't hurt, IMHO.
>
> Looks reasonable?

Apart from the "beginning discipline" part, yes, but it took me a while
to learn it.

A huge part of human brains is devoted to pattern recognition/analysis.
Consistent indentation lets one use that part to understand programs.
Inconsistent indentation can make that part work against said
understanding.

Try it for yourself.  Print out several pages of a well-indented
program,
arrange them in a rough square on a table or the floor, and step back
until
you can't read the tokens.  You'll see things about the program from
just
its shape.  You can use this idea to help debug, refactor, and/or
extend.

Folding/outline based code editors try to leverage some of the same
mechanisms.
From: Pascal Costanza
Subject: Re: CL failure stories?
Date: 
Message-ID: <3sadcaFn3qfhU1@individual.net>
······@earthlink.net wrote:
> Robert Strandh wrote:
> 
>>= Sometime ago I stumbled over an introductory class using I believe
>>= Python.  The disadvantage is that it won't let you execute anything
>>= that's not correctly indented.  The advantage in a programming class is
>>= that it forces students to think about indentation (and thus program
>>= structure) instead of just typing some expressions in a try-and-repeat
>>= loop.  Later on, for actual programming, Lisp might be as well suited,
>>= but the beginning discipline doesn't hurt, IMHO.
>>
>>Looks reasonable?
> 
> 
> Apart from the "beginning discipline" part, yes, but it took me a while
> to learn it.
> 
> A huge part of human brains is devoted to pattern recognition/analysis.
> Consistent indentation lets one use that part to understand programs.
> Inconsistent indentation can make that part work against said
> understanding.
> 
> Try it for yourself.  Print out several pages of a well-indented
> program,
> arrange them in a rough square on a table or the floor, and step back
> until
> you can't read the tokens.  You'll see things about the program from
> just
> its shape.  You can use this idea to help debug, refactor, and/or
> extend.
> 
> Folding/outline based code editors try to leverage some of the same
> mechanisms.

This may all be true, but doesn't require a language that "forces 
students to think about identation". In Lisp, you get that for free...


Pascal

-- 
My website: http://p-cos.net
Closer to MOP & ContextL:
http://common-lisp.net/project/closer/
From: Ulrich Hobelmann
Subject: Re: CL failure stories?
Date: 
Message-ID: <3saamcFn6dccU1@individual.net>
Robert Strandh wrote:
> Ulrich Hobelmann <···········@web.de> writes:
> 
>> Sometime ago I stumbled over an introductory class using I believe
>> Haskell.  The disadvantage is that it won't let you execute anything
>> that's not correctly typed.  The advantage in a programming class is
>> that it forces students to think about types (and thus program
>> structure) instead of just typing some expressions in a try-and-repeat
>> loop.  Later on, for actual programming, Lisp might be as well suited,
>> but the beginning discipline doesn't hurt, IMHO.
> 
> While that may seem obvious to you, it is not to me.  Try substituting
> "indentation" for "typing", and you get something like: 
> 
> = Sometime ago I stumbled over an introductory class using I believe
> = Python.  The disadvantage is that it won't let you execute anything
> = that's not correctly indented.  The advantage in a programming class is
> = that it forces students to think about indentation (and thus program
> = structure) instead of just typing some expressions in a try-and-repeat
> = loop.  Later on, for actual programming, Lisp might be as well suited,
> = but the beginning discipline doesn't hurt, IMHO.
> 
> Looks reasonable? 

Harhar, good ugly joke :)
(even though you are serious)

> Now try substituting "using long variable names" or "using variable
> names that exist in a dictionary" or just about any other
> restriction, and you will see that it is not trivially true that "the
> beginning discipline doesn't hurt".  
> 
> For the "discipline" not to "hurt" the restriction must be widely
> agreed upon to be a good one.  And we still don't agree with respect
> to static typing.

Ok, let me explain differently.  I'm not saying static typing rules and 
everybody should use it.  I just think:
* it's useful to learn, not the least because mainstream languages like 
Java and C# are typed, and badly (so a good type system makes a good 
contrast)
* it forces students to think not about program structure/indentation ;) 
but about the structure of data flow in their program; for a beginner 
I'd consider that valuable

OTOH that's just me, and I don't have any right to force my views on 
anybody.  OTOH most schools aren't about freedom, so they force you to 
do useless crap all the time anyway :(

> Some of my colleagues often fall into similar traps by thinking that
> forcing students to be disciplined is automatically good.  Thus we
> have had aberrations in the past such as "desk compiling" code even
> though compilers are readily available, or suggesting everyone should

That's a good example as that's a typical exercise for compiler or CPU 
architecture classes.

> use punch cards in the beginning, or forcing students not to type a
> single line of code until the design is done. 
> 


-- 
The road to hell is paved with good intentions.
From: Pascal Costanza
Subject: Re: CL failure stories?
Date: 
Message-ID: <3saeclFn4r42U1@individual.net>
Ulrich Hobelmann wrote:

> Ok, let me explain differently.  I'm not saying static typing rules and 
> everybody should use it.  I just think:
> * it's useful to learn, not the least because mainstream languages like 
> Java and C# are typed, and badly (so a good type system makes a good 
> contrast)
> * it forces students to think not about program structure/indentation ;) 
> but about the structure of data flow in their program; for a beginner 
> I'd consider that valuable

I don't think that this is related to the type system as such. The type 
system just enforces certain restrictive aesthetics, and all kinds of 
restrictive aesthetics make you think harder about what you are trying 
to do. You can achieve the same effect with almost any kind of arbitrary 
rule.

Artists know that, and do that regularly to achieve "interesting 
results". Think about twelve-tone or minimalist music, for example, or 
the various structures or conventions that are followed and/or sometimes 
artifically changed in various art forms.

You could think about similar artifical restrictions in the case of 
programming. Say, you could force yourself to make sure that each and 
every function in your program is _exactly_ 7 lines long, not less, not 
more. This will probably force you to think about what you are really 
trying to do in your program as well.

As anecdotal evidence, Richard Gabriel had a group of people do an 
exercise at one of the "Pattern Languages of Programs" conferences. The 
idea was that, on average, the thumbnails of design patterns consist of 
53 (or so) words. So he had given them the task to rewrite the 
thumbnails of those patterns so that they all had _exactly_ 53 words. 
One of the participants of that experiment told me that this indeed made 
the group understand those patterns a lot better than before, because it 
made them think hard about what parts of the description are really 
essential, and what is just redundant or unnecessary.

I myself experience this effect when my papers are somewhat longer than 
the maximal number of pages accepted by a conference or workshop. 
Cutting the paper down so that it gets below the maximal number of pages 
typically helps to sharpen my understanding of what I have written.

You hear fans of Hindley-Milner-style type systems typically say that 
the type system makes them write "correct" programs. I think this is 
basically because of that same (or similar) effect, not primarily 
because of the types as such. (This is just a guess, I cannot prove 
this, of course.)

The problem with such type systems is that you cannot change the rules.


Pascal

-- 
My website: http://p-cos.net
Closer to MOP & ContextL:
http://common-lisp.net/project/closer/
From: Raffael Cavallaro
Subject: Re: CL failure stories?
Date: 
Message-ID: <2005102700541016807%raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2005-10-26 18:30:43 -0400, Pascal Costanza <··@p-cos.net> 
insightfully opined:

> I don't think that this is related to the type system as such. The type 
> system just enforces certain restrictive aesthetics, and all kinds of 
> restrictive aesthetics make you think harder about what you are trying 
> to do. You can achieve the same effect with almost any kind of 
> arbitrary rule.
> 
> Artists know that, and do that regularly to achieve "interesting 
> results". Think about twelve-tone or minimalist music, for example, or 
> the various structures or conventions that are followed and/or 
> sometimes artifically changed in various art forms.

Picasso famously said "forcing yourself to use restricted means is the 
sort of restraint that liberates invention.� It obliges you to make a 
kind of progress that you can't even imagine in advance."

[interesting examples elided]

> You hear fans of Hindley-Milner-style type systems typically say that 
> the type system makes them write "correct" programs. I think this is 
> basically because of that same (or similar) effect, not primarily 
> because of the types as such. (This is just a guess, I cannot prove 
> this, of course.)

I for one believe that you are right in this conjecture.

> The problem with such type systems is that you cannot change the rules.

Exactly. Better far to use a language where you can choose which means 
to restrict and which to allow creative freedom.

regards
From: Pascal Costanza
Subject: Re: CL failure stories?
Date: 
Message-ID: <3set37Fo23ahU2@individual.net>
Raffael Cavallaro wrote:

> Picasso famously said "forcing yourself to use restricted means is the 
> sort of restraint that liberates invention.  It obliges you to make a 
> kind of progress that you can't even imagine in advance."

Nice quote.

>> You hear fans of Hindley-Milner-style type systems typically say that 
>> the type system makes them write "correct" programs. I think this is 
>> basically because of that same (or similar) effect, not primarily 
>> because of the types as such. (This is just a guess, I cannot prove 
>> this, of course.)
> 
> I for one believe that you are right in this conjecture.

Hey, then we are at least two! ;)


Pascal

-- 
My website: http://p-cos.net
Closer to MOP & ContextL:
http://common-lisp.net/project/closer/