From: ···········@gmail.com
Subject: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <1120020938.236039.201230@g14g2000cwa.googlegroups.com>
Hi all, I'm a LispNYC guy.  We just got 9 projects accepted by
Google for the Summer of Code, but we have some slack mentoring
capacity, and a couple of really important projects to push.  So I
thought it might be a good idea to grab some of the talent that
did not get selected for Google's Summer of Code, and to match that
talent with mentors who have more brains than time.  This is the
announcement I will post to the Google Summer of Code discussion
board, and a few other places.  I would love to have your comments,
criticism, and project suggestions.

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

Coders of summer!
Some of you might be considering at this very moment what to do with
the rest of your vacation season.  If so, you should apply to the
LispNYC Summer of Lisp!

We are not giving away money, but we offer the chance to work with
great mentors on terribly exciting projects that will knock the socks
off of prospective employers, fellow coders, and in some alternate
realities, cuties at bars. There is even a job involved for one lucky
applicant!  For details, read on!

What was that about a job?
Franz, a major lisp implementation vendor, wants to embed a lispy
language in Firefox as a plugin, so that people who want dynamic
webpages can use something more powerful than JavaScript.  They want
someone to work on it with them in California for the summer.  We like
the project, so we are helping them get the word out.

Where can I apply?
http://www.lispnyc.org/summerofcode.html

Who is eligible?
Anyone.  We will give preference, all other things being equal, to
those who commit to a lot of time, so students have a leg up there. But
anyone may apply.

Can groups join?
Of course!

If you are not giving money, then what do I get?
I run a lot better with a coach, even if he knows nothing about
running.  Just having someone to help keep me motivated is a win.  We
will mentor you, direct you toward problems that we know are a big
deal, and get the fruits of your labor more exposure after the project
is done.  That, and T-shirts.

Will you guys teach me lisp?
Nope.  But we will point you in the right direction for teaching
yourself.

What kind of projects are available?
Compiler optimization, new language implementations both of and in
lisp, libraries, porting, and whole new applications.  Computer
scientists found out a while back that the lambda calculus is good for
just about everything, and lisp is one of the better approximations to
the lambda calculus out there, so it is pretty good at just about any
domain you care to consider.

I have a project that I would like to suggest and mentor.
Great!  We can always use more.

Why are you doing this?
Read The Long Tail.  There are lots of projects that are not worth
spending a bunch of money on, because they are too risky, or not a big
enough win.  They are still a good thing to do, both for the
implementor and for the world at large.  We think there are a ton of
potential mentors in the lisp community, and a ton of people who would
gain greatly from exposure to the language and the projects, and we
would like to release that value into the world.

Why should I do it?
Because you can.


Why Lisp?
You can stretch your brain by learning hard new things that don't have
lots of garbage in them.  Memorizing the Manhattan phonebook or the
java class library specifications does not make you smarter, but
studying physics probably does.  Lisp is a little bit like physics in
that it does not require much memorization of detail but has lots of
big new concepts, many of which are present in diluted form in more
common disciplines/languages.

No really.  Why lisp?

A programming language is a lot of things, but the most central is a
pipe between your messy, biological, analog head and certain kinds of
discrete math, usually machine code.  Some programming languages, like
C, assembler, and fortran, are very close to the actual details of some
actual machine.  Others, like Haskell, Python, perl, and so forth, are
much farther away from any actual machine, but are closer to your head,
and so are a lot easier to program in, because you can think more about
your problem and less about the machine.  The lisp family of languages
is unique in the degree to which it allows you to be both close to the
machine and close to pure thought-stuff in the same language, sometimes
even in the same program.

As an example, if you have a bunch of things, and you want to apply
some function to all of the things, you can say:
(map function things)

That is exactly what you wanted to compute.  Nothing more, nothing
less.

On the other hand, Common Lisp is just about the only language that has
a disassembler required in the language spec, and if you use that
disassembler to check out the code that a properly tuned lisp program
puts out, it turns out to be very close to the assembler that a very
good C compiler puts out, only without the buffer overruns and memory
leaks.  (We're working on an emulation package to keep the C guys
feeling at home.)

So Lisp gives you most of the advantages of high-level languages like
python and most of the advantages of low level languages like C in one
language, but it also gives you some things that other languages cannot
in any reasonable way.

If you have some expensive operation that returns a true/false result,
and you are only interested in the majority vote, you might want to
stop as soon as a majority is already decided, the same way you stop
evaluating when you reach a false result in an AND, or a true one in
an OR.  The interface should look like this:

(majority TRUE TRUE TRUE x y) => true, without actually evaluating x
or y.  If you frequently deal with a large set, most of which will turn
out either true or false, then having this in your toolbox could help a
lot.  Exercise for the reader: implement this in whatever language you
want, such that people can say

(majority (expensive-function-call) (long-running-query-to-db)
(get-user-input)) and the user only gets asked for input if the first 2
arguments evaluate to different values.  Hint: it involves massive
pain, and requires the users of the function to do things a little (or
a lot) differently to make it work.
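
For contrast, here is roughly what that painful workaround looks like
in Python (a hypothetical sketch, not a canonical solution): every
argument must be wrapped in a lambda so its evaluation can be
deferred, which is exactly the "do things differently" cost the hint
warns about.

```python
# Hypothetical sketch: a short-circuiting majority in a language
# without macros.  Callers must wrap every argument in a lambda
# (a thunk) so that evaluation can be deferred until needed.

def majority(*thunks):
    """Return the majority truth value, calling as few thunks
    as possible.  A tie (possible with an even count) is false."""
    need = len(thunks) // 2 + 1   # votes needed to decide
    trues = falses = 0
    for thunk in thunks:
        if thunk():
            trues += 1
            if trues >= need:
                return True       # majority reached; stop early
        else:
            falses += 1
            if falses >= need:
                return False
    return trues > falses

# The first three votes already decide the outcome, so the last
# two thunks (which would fail loudly) are never called:
result = majority(lambda: True, lambda: True, lambda: True,
                  lambda: 1 / 0, lambda: 1 / 0)
```

In Lisp, a macro can generate that thunk-wrapping automatically, so
callers just write (majority a b c) with ordinary call syntax.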

From: ·········@aol.com
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <1120049200.693910.174390@z14g2000cwz.googlegroups.com>
···········@gmail.com wrote:

<snip>

> No really.  Why lisp?
>
> A programming language is a lot of things, but the most central is a
> pipe between your messy, biological, analog head and certain kinds of
> discrete math, usually machine code.  Some programming languages, like
> C, assembler, and fortran, are very close to the actual details of some
> actual machine. Others, like Haskell, Python, perl, and so forth, are
> much farther away from any actual machine, but are closer to your head,
> and so are a lot easier to program in, because you can think more about
> your problem and less about the machine.  The lisp family of languages
> is unique in the degree to which it allows you to be both close to the
> machine and close to pure thought-stuff in the same language, sometimes
> even in the same program.

Fortran 95 and 2003 are higher level languages than C or assembler, and
are arguably higher-level than Python or Lisp in the area of scientific
computing. Of course, "higher-level" /= "better". Language comparisons
should be based on current versions of languages for which compilers or
interpreters exist.
From: Eric Lavigne
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <1120053409.128686.52520@g44g2000cwa.googlegroups.com>
>Fortran 95 and 2003 are higher level languages than C or assembler, and
>are arguably higher-level than Python or Lisp in the area of scientific
>computing.

I don't know anything about 2003 (except that it doesn't really exist
yet), but I use FORTRAN 95 all the time. I would say it is a lower
level language than C++ (regardless of which field you are working in)
and comes nowhere close to Lisp. I'd love to hear why you feel
differently. From what I have seen, any FORTRAN 95 program can be
translated line for line into Lisp (skipping some lines in the FORTRAN
program that are redundant but keep the FORTRAN 95 compiler happy).

Maybe you are thinking about library availability rather than the
language itself? If so, you might find this page interesting:

http://www.nhplace.com/kent/Papers/Fortran-to-Lisp.html
From: ·········@aol.com
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <1120082734.695209.113550@z14g2000cwz.googlegroups.com>
Eric Lavigne wrote:
> >Fortran 95 and 2003 are higher level languages than C or assembler, and
> >are arguably higher-level than Python or Lisp in the area of scientific
> >computing.
>
> I don't know anything about 2003 (except that it doesn't really exist
> yet), but I use FORTRAN 95 all the time. I would say it is a lower
> level language than C++ (regardless of which field you are working in)
> and comes nowhere close to Lisp.

In my earlier message I compared Fortran to C, not C++. I think the
functionality of Fortran's multidimensional arrays (elemental
functions, array slices) is more extensive than that of C++, whose
arrays have the same properties as those of C. C++ does have the
higher-level vector for 1-D arrays in the Standard Library, but I don't
see it used that much in numerical work. The C++ valarray seems to be
an orphan.

I don't know Lisp, but looking at the number of lines of code to
implement various algorithms in the Language Shootout at
http://shootout.alioth.debian.org/ , it seems that the ratio of lines
of code of Lisp to Fortran ranges from about 0.5 to 1.0.  By this
measure, "nowhere close to Lisp" may be an exaggeration.

> I'd love to hear why you feel
> differently. From what I have seen, any FORTRAN 95 program can be
> translated line for line into Lisp (skipping some lines in the FORTRAN
> program that are redundant but keep the FORTRAN 95 compiler happy).

Since Fortran compilers produce pretty efficient code, in part because
companies such as Intel, IBM, and Sun (as well as independent compiler
vendors) have devoted considerable resources to Fortran compiler
performance tuning, there are a few projects to translate code from
functional languages to Fortran, for example

MathCode F90 (Mathematica to Fortran 90)
http://www.wolfram.com/products/applications/mathcodef90/

Parametric Fortran (using Haskell)
http://web.engr.oregonstate.edu/~erwig/pf/

> Maybe you are thinking about library availability rather than the
> language itself? If so, you might find this page interesting:
>
> http://www.nhplace.com/kent/Papers/Fortran-to-Lisp.html

At that site, comparing the Fortran code

      SUBROUTINE MATMUL (X, Y, IDIM)
      REAL X(IDIM,1),Y(IDIM,1),Z(100)
      DO 100 I=1,IDIM
         Z(I)=0.
      DO 200 J=1,IDIM
  200    Z(I)=X(I,J)*Y(J,I)+Z(I)
      DO 100 J=1,IDIM
  100    X(I)=Z(I)
      RETURN
      END

to the Lisp

(FORTRAN (MATMUL X Y IDIM)
         (SUBROUTINE (REAL (X 1) (Y 1) (Z 100)) (INTEGER IDIM I J))
         (: I 1)
DO-I-100 (: (Z I) 0.0)
         (: J 1)
DO-J-200
/200     (: (Z I) (+$ (Z I) (*$ (X I J) (Y J I))))
         (COND ((<= (: J (1+ J)) IDIM) (GO DO-J-200)))
DO-J-100 (: (X I) (Z I))
         (COND ((<= (: J (1+ J)) IDIM) (GO DO-J-100)))
         (COND ((<= (: I (1+ I)) IDIM) (GO DO-I-100)))
         (RETURN T))

it's not obvious to me what the benefit of translation is, unless you
just prefer Lisp (as most readers here probably do). Fortran has had
proper loops since the 1990 standard so the numbered lines could be
replaced with ENDDO's (Yes, 1990 was too late. Many compilers had it as
an extension years earlier.). Furthermore, Fortran 90 added an
intrinsic function MATMUL for matrix multiplication.
From: Kent M Pitman
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <uk6kc240r.fsf@nhplace.com>
·········@aol.com writes:

> Eric Lavigne wrote:
> ...
> > Maybe you are thinking about library availability rather than the
> > language itself? If so, you might find this page interesting:
> >
> > http://www.nhplace.com/kent/Papers/Fortran-to-Lisp.html
> 
> At that site, comparing the Fortran code
> 
>       SUBROUTINE MATMUL (X, Y, IDIM)
>       REAL X(IDIM,1),Y(IDIM,1),Z(100)
>       DO 100 I=1,IDIM
>          Z(I)=0.
>       DO 200 J=1,IDIM
>   200    Z(I)=X(I,J)*Y(J,I)+Z(I)
>       DO 100 J=1,IDIM
>   100    X(I)=Z(I)
>       RETURN
>       END
> 
> to the Lisp
> 
> (FORTRAN (MATMUL X Y IDIM)
>          (SUBROUTINE (REAL (X 1) (Y 1) (Z 100)) (INTEGER IDIM I J))
>          (: I 1)
> DO-I-100 (: (Z I) 0.0)
>          (: J 1)
> DO-J-200
> /200     (: (Z I) (+$ (Z I) (*$ (X I J) (Y J I))))
>          (COND ((<= (: J (1+ J)) IDIM) (GO DO-J-200)))
> DO-J-100 (: (X I) (Z I))
>          (COND ((<= (: J (1+ J)) IDIM) (GO DO-J-100)))
>          (COND ((<= (: I (1+ I)) IDIM) (GO DO-I-100)))
>          (RETURN T))
> 
> it's not obvious to me what the benefit of translation is, unless you
> just prefer Lisp (as most readers here probably do). Fortran has had
> proper loops since the 1990 standard so the numbered lines could be
> replaced with ENDDO's (Yes, 1990 was too late. Many compilers had it as
> an extension years earlier.). Furthermore, Fortran 90 added an
> intrinsic function MATMUL for matrix multiplication.

That paper was written in 1978 and should be compared in context.

Also, the point of the translation results was to show that there were
two steps that were separately managed by my translator: syntactic and
semantic.  The pseudocode you quote above was something that could be
edited to include Lisp operations and variables.  The translator would
apply the appropriate kinds of calls (Lisp or Fortran) to the various
functions.  Lisp variables were call-by-value, Fortran variables were 
call-by-reference.  That part of the translation was not done by the
translator per se, but by a set of macros that the translator expected
you to use to evaluate the output of its translation and a set of 
subroutines that managed the Fortran virtual machine space, doing the
Fortran equivalent of automatic storage management, etc.

The issue of speed was orthogonal to that.  In fact, the translator lost
speed that native coding would not lose.  On the other hand, it was 
sufficiently in range that it wasn't grossly slow.  And it put into the
hands of MACSYMA users (who called out to Lisp a lot) the entire IMSL
Fortran library, which was a powerful set of tools whether they were
running 100% efficiently or not.

Automatic translation is never going to be a good comparison of two
languages speedwise.  Part of the point of the paper is to identify
some of the ways in which the translation made certain optimizations
hard to do: Lisp was not optimizable in certain cases because it
worried that your use of that memory was more general-purpose than it
turned out to be, and that fear of general-purpose access inhibited
certain important optimizations.

JonL White (I think it was him) wrote a paper in the same timeframe showing
how natively coded Lisp code doing numerical stuff outperformed DEC Fortran.
But benchmarks are always transitory and saying one language is better than
another is really not as interesting as saying that the languages "care" and
"try to be competitive".
From: David Golden
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <6oGwe.1878$R5.477@news.indigo.ie>
·········@aol.com wrote:


> In my earlier message I compared Fortran to C, not C++. I think the
> functionality of Fortran's multidimensional arrays (elemental
> functions, array slices) is more extensive than that of C++, whose
> arrays have the same properties as those of C. 

People masochistic enough to work by choice in C++ would usually use
the Boost or Blitz++ multidim arrays, which both make extensive
use of templates and operator overloading to embed multidim-array
little languages into C++:

Boost:
http://www.boost.org/libs/multi_array/doc/user.html

Blitz++:
http://www.oonumerics.org/blitz/manual/blitz02.html#l30

From Blitz++ example:

#include <blitz/array.h>

using namespace blitz;

int main()
{
    Array<float,2> A(3,3), B(3,3), C(3,3);

    A = 1, 0, 0,
        2, 2, 2,
        1, 0, 0;

    B = 0, 0, 7,
        0, 8, 0,
        9, 9, 9;

    C = A + B;
    cout << "A = " << A << endl
         << "B = " << B << endl
         << "C = " << C << endl;

    return 0;
}

As you can see, a bit of a step up from C arrays.  I know many C++
programmers don't consider C++ a complete language without Boost.

Fortran 90 or above is still a nicer choice IMHO, really: it makes for
clearer code (to the countless scientists and engineers who know and
"love" Fortran) and still integrates better with the vast corpus of
Fortran numerics libraries built up over the past decades, but anyway.
From: Matthias Buelow
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <8664vxthgo.fsf@drjekyll.mkbuelow.net>
···········@gmail.com writes:

>your problem and less about the machine.  The lisp family of languages
>is unique in the degree to which it allows you to be both close to the
>machine and close to pure thought-stuff in the same language, sometimes
>even in the same program.

Please explain to me in what way Common Lisp is "close to the machine".

mkb.
From: ···········@gmail.com
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <1120070897.908121.240140@o13g2000cwo.googlegroups.com>
2 ways, really.  One is that a good lisp implementation produces code
with approximately the same efficiency characteristics as that
produced by a good C compiler, when given properly annotated lisp
source.  This can be contrasted with, say, perl, or ruby, or python,
which do not, in my experience, produce very efficient code.

The second is that there are a number of lisp operations that
correspond directly to fairly simple machine instructions.  As I
understand it, (cdr x) grabs whatever is stored at (the address stored
in x) - c, where c is some constant that I cannot recall (5, maybe?) on
Allegro CL.  It is pretty hard to get closer to the machine than that.
Scheme has the same thing: a tail call compiles down to a goto.  I do
not think that scheme and lisp are as close to the machine as they
could be, but they are both excellent in this regard.  I also mention
the common lisp disassembler in the paragraph below the one you comment
on.
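
To make that concrete, here is a toy model in Python (the tag value
and memory layout are made up for illustration; they are not Allegro
CL's actual scheme).  The heap is a flat array of words, a cons is two
adjacent words, and cdr is one subtract-and-load:

```python
# Hypothetical sketch of why (cdr x) is "close to the machine".
# A cons occupies two consecutive heap words; a "pointer" to it
# is its address plus a small tag marking it as a cons.  The tag
# value here is invented for illustration.

HEAP = []
CONS_TAG = 1  # low bits of the word mark it as a cons pointer

def cons(car_val, cdr_val):
    """Allocate two words and return a tagged pointer."""
    addr = len(HEAP)
    HEAP.extend([car_val, cdr_val])
    return addr + CONS_TAG

def car(x):
    # Untag and load the first word.
    return HEAP[x - CONS_TAG]

def cdr(x):
    # Untag (subtract the constant) and load the second word: one
    # indexed memory read, which is why a compiler can fold the
    # subtraction into the addressing mode of a single load.
    return HEAP[x - CONS_TAG + 1]

pair = cons(42, 99)
```

The point of the sketch is that once the tag is known, no
interpretation happens at all: cdr is pure address arithmetic.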

I am not trying to optimize for rigor here, but for rhetoric.  That
said, I think the things I said were defensible anyway, so I am glad
you asked.
From: Matthias Buelow
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <86mzp9rmde.fsf@drjekyll.mkbuelow.net>
···········@gmail.com writes:

>2 ways, really.  One is that a good lisp implementation produces code
>with approximately the same efficiency characteristics as that
>produced by a good C compiler, when given properly annotated lisp
>source.  This can be contrasted with, say, perl, or ruby, or python,
>which do not, in my experience, produce very efficient code.

Well.  There's little preventing someone from writing a compiler that
outputs machine code for one of those languages as well, imho.

>The second is that there are a number of lisp operations that
>correspond directly to fairly simple machine instructions.  As I
>understand it (cdr x) grabs whatever is stored at (the address stored
>in x) - c, where c is some constant that I cannot recall (5, maybe?) on

So, you mentioned Haskell as being "much farther away from any actual
machine". What is different with Haskell's "tail" that makes Lisp's
"cdr" so much nearer to the machine? (You give that as an example, not
I). I'd actually say that the "tail" is "nearer to the machine" simply
because the Haskell compiler doesn't have to deal with type checks at
runtime (which cdr normally has to) and as such can emit tighter
machine code.

>Scheme has the same thing-a tail call compiles down to a goto.   I do

Tail call optimizations are provided in most functional language
compilers.

>not think that scheme and lisp are as close to the machine as they
>could be, but they are both excellent in this regard.  I also mention

I'd say that Common Lisp (as standardized) as well as Scheme (in R5RS)
are actually rather far from the machine.  A real implementation might
of course extend the standard and provide mechanisms for dealing with
machine objects (such as word-sized integers, bytes, etc.) but apart
from that, CL (and Scheme) are just as abstract as Haskell, ML, and
what have you.  Not that I consider that abstraction level bad, of
course.

mkb.
From: Alex Mizrahi
Subject: Re: Request for comment: follow-up to Summer of Code
Date: 
Message-ID: <42cc3eb2$0$18637$14726298@news.sunsite.dk>
(message (Hello 'Matthias)
(you :wrote  :on '(Wed, 29 Jun 2005 21:31:09 +0200))
(

 ??>> 2 ways, really.  One is that a good lisp implementation produces code
 ??>> with approximately the same efficiency characteristics as that
 ??>> produced by a good C compiler, when given properly annotated lisp
 ??>> source.  This can be contrasted with, say, perl, or ruby, or python,
 ??>> which do not, in my experience, produce very efficient code.

 MB> Well. There's little preventing someone from writing a compiler that
 MB> outputs machine code for one of those languages aswell, imho.

there are languages where it's impossible by design to create an
efficient compiler at all.  kinda they are too dynamic..
i think there were enormous efforts to make Python efficient -- it is
widely used, but results are poor -- it's simply impossible.

)
(With-best-regards '(Alex Mizrahi) :aka 'killer_storm)
"People who lust for the Feel of keys on their fingertips (c) Inity")