From: ············@gmail.com
Subject: Redoing Matlisp?
Date: 
Message-ID: <1158642310.999720.254940@m73g2000cwd.googlegroups.com>
Greetings!

I'm thinking of reworking / redoing Matlisp to support a whole
framework for optimized linear algebra that exploits the BLAS and
LAPACK.  Here are some features that I'd like to see:

1. Matrix views (like in the GSL) -- handy for the
recursively-structured approach to linear algebra that's popular
nowadays, and also handy for my research.  I want strided matrix views
too (GSL only has "window" matrix views which are like subblocks).

2. In-place operations -- Matlab doesn't support the in-place
factorizations that I want to do in order to save memory.

3. Different (implicit) representations of things like matrix
factorizations -- Matlab doesn't give me access to things like the
Householder vector representation of a Q factor in a QR factorization,
which I want because it's a compact representation of the _entire_ Q
factor for a tall and skinny matrix.

4. Using macros and series operations to enable efficient iteration
over things like strided matrix views (the scanner does the indexing,
so you don't have to pay for index overhead every time you call a
matrix-aref kind of function).

5. Exploiting CLOS to get automatic and efficient handling of matrices
of different data types.

6. Potential for parallelization (e.g. an interface to ScaLAPACK or my
own parallel routines, or even an MPI interface).  Matlab is going that
way too: there's Matlab*P, plus Cleve has been giving talks about the
MathWorks' parallel Matlab (which crashed during one of his demos -- heh
;p).

7. Can transform Matlab code into a native representation.  (Matlab
parser returns interpretable native code, plus you can feed it into
Lisp's COMPILE and get compiled Matlab for free!)
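To sketch what I mean by #1 (the names MVIEW and MV-REF are just
placeholders, not a final API), a strided view is a choice of offset
and strides over shared storage:

```lisp
;; Minimal strided matrix view over a flat double-float vector,
;; assuming column-major storage as BLAS/LAPACK expect.
(defstruct mview
  (data (make-array 0 :element-type 'double-float)
        :type (simple-array double-float (*)))
  (offset 0 :type fixnum)      ; linear index of element (0,0) in DATA
  (nrows 0 :type fixnum)
  (ncols 0 :type fixnum)
  (row-stride 1 :type fixnum)  ; distance in DATA from (i,j) to (i+1,j)
  (col-stride 1 :type fixnum)) ; distance in DATA from (i,j) to (i,j+1)

(declaim (inline mv-ref))
(defun mv-ref (v i j)
  "Element (I,J) of the view V; no copying of the underlying storage."
  (aref (mview-data v)
        (+ (mview-offset v)
           (* i (mview-row-stride v))
           (* j (mview-col-stride v)))))
```

A "window" subblock just changes OFFSET and the dimensions; doubling
COL-STRIDE gives you every other column of the same storage.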

The perhaps overly ambitious goal is to make something almost as easy
to use as Matlab that is much more efficient.  We want interactive
scientific computing that presents a programming model which avoids
sacrificing efficiency, especially for very large problems.  The
shorter-term goal is to make a better Matlisp.

I've got a goodly amount of code sketched out with #1 and #4, and I've
written up a number of ideas.  If you are interested in participating
in the project, please let me know.  I have connections with folks who
write optimized numerical software (LAPACK, OSKI for sparse matrices)
so we have potential for incorporating a lot of cool stuff.  I could
use some experienced Lisp developers as I've only been into Lisp
seriously for a year and a half or so (though I've been coding since I
was yea high *holds hand not too far above the floor* ;) ).

Please contact me directly if you are interested.  We can set up a
separate mailing list and a code repository.

Best,
Mark Hoemmen
http://www.cs.berkeley.edu/~mhoemmen/

From: ·············@specastro.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1158706396.691286.79140@e3g2000cwe.googlegroups.com>
············@gmail.com wrote:
> Greetings!
>
> I'm thinking of reworking / redoing Matlisp to support a whole
> framework for optimized linear algebra that exploits the BLAS and
> LAPACK.  Here are some features that I'd like to see:

You might find these two hard-to-find papers interesting:

"Object-oriented design in numerical linear algebra"
http://jamcdonald0.home.att.net/cactus-tr.pdf

and

"Object-oriented programming for linear algebra"
http://jamcdonald1.home.att.net/papers/oopsla89.pdf

McDonald's other publications might have some relevance as well.
http://home.att.net/~jamcdonald/publications.html

The two papers referenced above take a more linear-algebraic approach,
rather than just providing a matrix API.

Glenn
From: Juanjo
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1158741885.003161.316210@i42g2000cwa.googlegroups.com>
············@gmail.com wrote:
> I'm thinking of reworking / redoing Matlisp to support a whole
> framework for optimized linear algebra that exploits the BLAS and
> LAPACK.  Here are some features that I'd like to see:
>
> 1. Matrix views (like in the GSL) -- handy for the
> recursively-structured approach to linear algebra that's popular
> nowadays, and also handy for my research.  I want strided matrix views
> too (GSL only has "window" matrix views which are like subblocks).

Forget about matrices.  Use tensors, i.e. n-dimensional arrays.
Common Lisp already has them, so use them.  Create a function (I call
it FOLD) to contract arbitrary indices of two tensors, and implement
matrix multiplication on top of that.  I have done it, using
BLAS/LAPACK cleverly (it detects when it can use multiplication by a
transpose); it is just as fast as Matlab's routines, and it allows me
to treat 3D meshes and even 6D problems.

Once you have tensors, strided views are also easy and can be done for
any number of indices.  I have done all this in C++, and there you miss
closures: if you want to loop over strided views, either you have to
expose all the inner guts of the strides using templates, or you have
to pass a pointer to a function.  In Common Lisp you could benefit from
macros.
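For example, a macro can expand into a loop that keeps a running
linear index, so the body pays one addition per element rather than a
full index computation (DO-STRIDED is just a name I made up for this
sketch):

```lisp
(defmacro do-strided ((var array start stride count) &body body)
  "Bind VAR to successive elements of ARRAY, starting at linear index
START and stepping by STRIDE, COUNT times."
  (let ((idx (gensym "IDX")) (k (gensym "K")))
    `(let ((,idx ,start))
       (dotimes (,k ,count)
         (let ((,var (row-major-aref ,array ,idx)))
           ,@body)
         (incf ,idx ,stride)))))

;; e.g., summing every other element of a 1-D array V of length 10:
;; (let ((s 0d0)) (do-strided (x v 0 2 5) (incf s x)) s)
```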

> 7. Can transform Matlab code into a native representation.  (Matlab
> parser returns interpretable native code, plus you can feed it into
> Lisp's COMPILE and get compiled Matlab for free!)

That would be nice, and probably easy. The most complicated part is
probably plotting and all the associated libraries that come with
Matlab: optimization, interface to Maple, etc. Those are the only
reason why I end up using Matlab.

Juanjo
From: ············@gmail.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1158766568.958465.197000@i3g2000cwc.googlegroups.com>
Juanjo wrote:
> > 7. Can transform Matlab code into a native representation.  (Matlab
> > parser returns interpretable native code, plus you can feed it into
> > Lisp's COMPILE and get compiled Matlab for free!)
>
> That would be nice, and probably easy. The most complicated part is
> probably plotting and all the associated libraries that come with
> Matlab: optimization, interface to Maple, etc. Those are the only
> reason why I end up using Matlab.

Oh, I don't want to step on Cleve Moler's toes ;p  The GUI / plotting /
toolbox stuff is why people pay for Matlab licenses; otherwise Octave
or Scilab or whatever would be good enough.  I'd be happy just making
the computations really fast and the programming model more sensible,
though if someone else wants to add the pretty pictures and tools, I'm
all for it :)

About the tensors -- I like the idea, and I'd love to see what your
expertise could contribute towards such a system!  I'm just a little
concerned that people who are used to thinking of problems in terms of
matrices and vectors may find the learning curve too steep for a
general tensor system.  Would there be much sacrifice in matrix /
vector performance if a matrix / vector system were implemented on top
of a general tensor system?  In particular, I'm hoping that the
equivalent of AREF doesn't involve a branch in fully optimized,
compiled code. 

Best,
mfh
From: Juanjo
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1158778197.715746.206070@d34g2000cwd.googlegroups.com>
············@gmail.com wrote:
> About the tensors -- I like the idea, and I'd love to see what your
> expertise could contribute towards such a system!  I'm just a little
> concerned that people who are used to thinking of problems in terms of
> matrices and vectors may find the learning curve too steep for a
> general tensor system.  Would there be much sacrifice in matrix /
> vector performance if a matrix / vector system were implemented on top
> of a general tensor system?

There is no performance penalty at all. People using matrices and
vectors can have the same interface, that is lisp arrays with two and
one dimension, respectively.

> In particular, I'm hoping that the
> equivalent of AREF doesn't involve a branch in fully optimized,
> compiled code.

Currently Matlisp is implemented using CLOS.  What can be slower than
that?  If you instead base your code on Lisp arrays and declare their
dimensions, or at least their rank, then AREF can be inlined by the
compiler.

In other operations, like tensor/matrix/vector traversal, you would use
macros that expand into a loop and calls to ROW-MAJOR-AREF with a
precomputed index.  That can be extremely fast with compilers such as
CMUCL.
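For instance, something like this should compile down to a tight loop
with no dispatch under CMUCL or SBCL (just a sketch):

```lisp
(defun scale! (a alpha)
  "Destructively multiply every element of the 2-D array A by ALPHA.
With the element type and rank declared, AREF/ROW-MAJOR-AREF become
plain indexed loads."
  (declare (type (simple-array double-float (* *)) a)
           (type double-float alpha)
           (optimize (speed 3)))
  (dotimes (i (array-total-size a) a)
    (setf (row-major-aref a i)
          (* alpha (row-major-aref a i)))))
```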

Regarding matrix/vector operations, a matrix is a 2D array, a vector a
1D array. If you have a function like I do with signature

(defun fold (tensor-a index-a tensor-b index-b)
   ;; index-a/b are the indices to be contracted. For an array with
   ;; D dimensions, they range from 0 to D-1, and -1..-D
   ;; with 0 == D, -1 == D-1, etc.
...)

then you can simply write

(defun m* (matrix/vector-a matrix/vector-b)
  (fold matrix/vector-a -1 matrix/vector-b -1))

which can be inlined. Even more, you can write a compiler macro which
inspects the arguments to FOLD and translates it to a single call to
BLAS/ATLAS when possible.
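Sketched roughly, with %GEMM standing in for whatever BLAS binding is
in use (the names and the index convention here are only illustrative):

```lisp
;; A compiler macro that rewrites FOLD calls with constant, recognized
;; index arguments into a direct BLAS call, and leaves everything else
;; to the general FOLD at runtime.
(define-compiler-macro fold (&whole form tensor-a index-a tensor-b index-b)
  (if (and (eql index-a -1) (eql index-b -1))
      `(%gemm ,tensor-a ,tensor-b)  ; hypothetical BLAS/ATLAS entry point
      form))                        ; fall back to the general routine
```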

People who like matrices and vectors can stay with them.  On the other
hand, you have a uniform and extensible way of dealing with
multidimensional tensors, like Matlab does, or even better (FOLD does
not exist in Matlab; I had to code it myself:
http://www.mpq.mpg.de/Theorygroup/CIRAC/people/ripoll/fold.c)

I only see two possible reasons for not using Lisp arrays: 1) when
there are no specialized array types for (complex double-float) and
(complex single-float), and 2) when it is not possible to pass the
addresses of these arrays through the FFI.

Juanjo
From: ············@gmail.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1158821420.913924.243350@m73g2000cwd.googlegroups.com>
Juanjo wrote:
> Currently matlisp is implemented using CLOS. What can be slower than
> that? If you rather base your code on lisp arrays and declare their
> dimensions or at least their rank then AREF can be inlined by the
> compiler.

I'm not questioning the accusation that CLOS is slow, but I'm surprised
that CLOS couldn't do at least some of its dispatching at compile time.
Would you happen to have a reference that explains why CLOS is so
slow?  (I'm not doubting you -- I'd just like to learn why.)  Google
doesn't suggest any immediate answers other than positive propaganda
from Franz Inc. ;)

What I'm thinking is that there will be a medium-level "linear algebra
assembly language" that's easier to use than straight Fortran and
bloody fast.  Then we can layer a Matlab-like language on top of that
and use code transformations to make it as fast as possible, though
some performance will be sacrificed by choosing the highest-level
programming model.  So the medium-level language can be optimized and
can avoid CLOS wherever possible, while the higher-level language can
use CLOS or whatever it needs to make things easy to use.

A good example is if you consider the different matrix storage formats
used by LAPACK.  Someone who knows the LAPACK interface realizes that
e.g. the Q factor in the QR decomposition is represented implicitly.
The medium-level interface will expose this.  However, the higher-level
interface will try to hide this from people who don't want to think
about the implicit representation.  A lot of optimization effort will
go into making the higher-level interface as fast as possible given the
constraints of assuming LAPACK-ignorant users ;)

> In other operations, like tensor/matrix/vector traversal, you would use
> macros that expand into a loop and calls to ROW-MAJOR-AREF with some
> precomputed index. That can be extremely fast for some compilers as
> CMUCL.

True -- if the compiler is smart, ROW-MAJOR-AREF is basically just a
pointer dereference, with almost no indexing at all.

> which can be inlined. Even more, you can write a compiler macro which
> inspects the arguments to FOLD and translates it to a single call to
> BLAS/ATLAS when possible.

Now _that's_ what I'm looking for :)

> I only see two possibles reason for not using lisp arrays: 1) when
> there are no specialized types for (complex double-float) and (complex
> single-float), 2) when it is not possible to pass the addresses of
> these arrays to FFI.

Some other folks who saw my post explained how to get a pointer to a
Lisp array with SBCL and CMUCL (CFFI and UFFI, when run under CMUCL or
SBCL, don't know how to pass a Lisp array into a C function that takes
a C array), so in all the Lisp implementations I can think of that
CFFI supports, I don't think #2 matters ;p

#1 is a somewhat more serious issue -- I think we have to look more
closely at different Lisp implementations to see how they handle those
types.  Thank you for pointing that out -- I hadn't realized that would
be a problem.

Best,
mfh
From: Pascal Bourguignon
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <87k63x75nb.fsf@thalassa.informatimago.com>
·············@gmail.com" <············@gmail.com> writes:

> Juanjo wrote:
>> Currently matlisp is implemented using CLOS. What can be slower than
>> that? If you rather base your code on lisp arrays and declare their
>> dimensions or at least their rank then AREF can be inlined by the
>> compiler.
>
> I'm not questioning the accusation that CLOS is slow but I'm surprised
> that CLOS couldn't do at least some of its dispatching at compile time.

(defclass root ()
   ())

(defmethod do-something ((self root))
  (print '(doing something for a root object)))

(defparameter *o* (make-instance 'root))
(progn 
   (print '(Hey user! Do you have anything to say?))
   (read)
   (do-something *o*))


Now, imagine the user types:

#.(progn (defclass sub (root) ()) 
         (defmethod do-something ((self sub)) 
            (print '(doing something for a sub object)))
         (change-class *o* 'sub))

What will happen?


>  Would you happen to have a reference that explains why CLOS is so
> slow?  (I'm not doubting you -- I'd just like to learn why.) 

It's not slow.  (Just try to do the same in C++!)


Now, nothing prevents you from implementing another object system in
Common Lisp.  There are other object systems, for example, KR (used in
Garnet).  You could implement an object system that is statically
dispatched like C++'s.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

PLEASE NOTE: Some quantum physics theories suggest that when the
consumer is not directly observing this product, it may cease to
exist or will exist only in a vague and undetermined state.
From: ············@gmail.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1159171668.084481.82340@d34g2000cwd.googlegroups.com>
Pascal Bourguignon wrote:
> (defclass root ()
>    ())
>
> (defmethod do-something ((self root))
>   (print '(doing something for a root object)))
>
> (defparameter *o* (make-instance 'root))
> (progn
>    (print '(Hey user! Do you have anything to say?))
>    (read)
>    (do-something *o*))
>
>
> Now, imagine the user types:
>
> #.(progn (defclass sub (root) ())
>          (defmethod do-something ((self sub))
>             (print '(doing something for a sub object)))
>          (change-class *o* 'sub))
>
> What will happen?

Ah, that makes sense now, thank you :)

> Now, nothing prevents you to implement another object system in Common
> Lisp.  There are other object systems, for example, KR (used in
> garnet).  You could implement an object system that is statically
> dispatched like C++'s one.

True.  For small fixed-size matrices, it may be useful to do something
like C++ templates -- automatic generation at compile-time of the
appropriate objects.
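For instance, a rough sketch of such a "template" -- a macro that takes
the dimensions at macroexpansion time and emits a fully unrolled,
type-declared multiply (names are made up, nothing here is a real API):

```lisp
(defmacro define-small-gemm (name m n k)
  "Define NAME as a fully unrolled C := A*B for fixed M x K times K x N
double-float matrices, with all loops expanded at compile time."
  `(defun ,name (a b c)
     (declare (type (simple-array double-float (,m ,k)) a)
              (type (simple-array double-float (,k ,n)) b)
              (type (simple-array double-float (,m ,n)) c)
              (optimize (speed 3)))
     ,@(loop for i below m append
         (loop for j below n collect
           `(setf (aref c ,i ,j)
                  (+ ,@(loop for p below k collect
                         `(* (aref a ,i ,p) (aref b ,p ,j)))))))
     c))

(define-small-gemm gemm-3x3 3 3 3)  ; one specialized routine per size
```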

mfh
From: Nicolas Neuss
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <8764fhgvrs.fsf@ortler.iwr.uni-heidelberg.de>
·············@gmail.com" <············@gmail.com> writes:

> Juanjo wrote:
>> Currently matlisp is implemented using CLOS. What can be slower than
>> that? If you rather base your code on lisp arrays and declare their
>> dimensions or at least their rank then AREF can be inlined by the
>> compiler.
>
> I'm not questioning the accusation that CLOS is slow but I'm surprised
> that CLOS couldn't do at least some of its dispatching at compile time.
>  Would you happen to have a reference that explains why CLOS is so
> slow?  (I'm not doubting you -- I'd just like to learn why.)  Google
> doesn't suggest any immediate answers other than positive propaganda
> from Franz Inc. ;)


Note that only the dispatch is slow.  For large matrix-matrix
multiplication or inversion, Matlisp can use ATLAS routines which are
significantly faster than simple-minded routines coded in either Lisp or
C++.

> What I'm thinking is that there will be a medium-level "linear algebra
> assembly language" that's easier to use than straight Fortran and
> bloody fast.  Then we can layer a Matlab-like language on top of that
> and use code transformations to make it as fast as possible, but some
> performance will be sacrificed by choosing to use the highest-level
> programming model.  So the medium-level language can be optimized, not
> use CLOS whenever possible, etc., but the higher-level language can use
> CLOS or whatever it needs to make things easy to use.

Why shouldn't this high-level be more or less like Matlisp? 

With respect to the question of using Lisp arrays: one disadvantage is that
you cannot dispatch on array element type.  Wrapping a specialized array
into a CLOS class you can dispatch on the type of the matrix/tensor and use
specialized fast code (or call a specialized BLAS routine).  I have
implemented this approach similar to Matlisp inside my PDE solving code
"Femlisp" (look in the directory #p"femlisp;src;matlisp").
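A minimal sketch of the wrapping idea (the class and function names
here are illustrative, not Femlisp's actual ones): one class per
element type, so a generic function dispatches once per operation
rather than once per element, and each method can call the matching
BLAS routine.

```lisp
(defclass double-matrix ()
  ((store :initarg :store :reader store
          :type (simple-array double-float (*)))
   (nrows :initarg :nrows :reader nrows)
   (ncols :initarg :ncols :reader ncols)))

(defgeneric scal! (alpha x)
  (:documentation "Destructively scale X by ALPHA."))

(defmethod scal! ((alpha double-float) (x double-matrix))
  ;; Here one would call DSCAL through the FFI; a plain Lisp loop
  ;; stands in for it in this sketch.
  (let ((s (store x)))
    (declare (type (simple-array double-float (*)) s))
    (dotimes (i (length s) x)
      (setf (aref s i) (* alpha (aref s i))))))
```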

Nicolas
From: rif
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <wj0hcz1l2nx.fsf@five-percent-nation.mit.edu>
Nicolas Neuss <··················@iwr.uni-heidelberg.de> writes:

> ·············@gmail.com" <············@gmail.com> writes:
> 
> > Juanjo wrote:
> >> Currently matlisp is implemented using CLOS. What can be slower than
> >> that? If you rather base your code on lisp arrays and declare their
> >> dimensions or at least their rank then AREF can be inlined by the
> >> compiler.
> >
> > I'm not questioning the accusation that CLOS is slow but I'm surprised
> > that CLOS couldn't do at least some of its dispatching at compile time.
> >  Would you happen to have a reference that explains why CLOS is so
> > slow?  (I'm not doubting you -- I'd just like to learn why.)  Google
> > doesn't suggest any immediate answers other than positive propaganda
> > from Franz Inc. ;)
> 
> 
> Note that only the dispatch is slow.  For large matrix-matrix
> multiplication or inversion, Matlisp can use ATLAS routines which are
> significantly faster than simple-minded routines coded in either Lisp or
> C++.
> 

Right, exactly.  The only thing that's important is to make sure that
you're not doing a CLOS dispatch on every access.  For instance, if I
do a matrix-matrix multiply in LAPACK, but then I want to look at all
the entries for some reason, I don't want to do n^2 CLOS dispatches.

Gosh, I really have to release my code soon...

rif
From: Nicolas Neuss
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <871wq5gv2l.fsf@ortler.iwr.uni-heidelberg.de>
Nicolas Neuss <··················@iwr.uni-heidelberg.de> writes:

> Why shouldn't this high-level be more or less like Matlisp? 
>
> With respect to the question of using Lisp arrays: one disadvantage is that
> you cannot dispatch on array element type.  Wrapping a specialized array
> into a CLOS class you can dispatch on the type of the matrix/tensor and use
> specialized fast code (or call a specialized BLAS routine).  I have
> implemented this approach similar to Matlisp inside my PDE solving code
> "Femlisp" (look in the directory #p"femlisp;src;matlisp").

To be clear: the interface is similar to Matlisp, but not all of Matlisp's
functionality is available (although you can easily extend Matlisp generic
functions for my classes).  Internally, my approach works very differently:
methods for specialized matrix classes are generated (CL source) and
compiled at runtime, if they are not yet available.

Nicolas
From: ············@gmail.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1159172560.485349.161710@d34g2000cwd.googlegroups.com>
Nicolas Neuss wrote:
> Note that only the dispatch is slow.  For large matrix-matrix
> multiplication or inversion, Matlisp can use ATLAS routines which are
> significantly faster than simple-minded routines coded in either Lisp or
> C++.

True, or a vendor-optimized BLAS, or even K. Goto's super-duper
optimized BLAS which is faster than everybody else's ;p

> Why shouldn't this high-level be more or less like Matlisp?

Matlisp presents (in a sense) two different interfaces:

1. In-place (destructive) operations such as GEMM!, m+!
2. Copy-semantics operations such as GEMM, m+

In a similar way, we want to provide both interfaces, though we want to
provide certain optimizations to make the copy-semantics operations as
efficient as possible.  We say "copy-semantics" because it may not
always be necessary to copy data, as long as the interface promises
that it will operate as if the data were copied.  This is a typical
optimization for "pure functional" code that works with large objects.

The "higher-level" interface can be like Matlisp or whatever people
want.  If they want a Matlab clone then all we need to do is write an
infix parser or a REPL.
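For elementwise addition, the two interfaces might be sketched like
this (COPY-MATRIX is assumed to exist and return a fresh matrix with
the same contents):

```lisp
(defun m+! (a b)
  "Destructive: A := A + B, elementwise.  Returns A."
  (dotimes (i (array-total-size a) a)
    (incf (row-major-aref a i) (row-major-aref b i))))

(defun m+ (a b)
  "Copy semantics: returns a fresh sum; A and B are untouched.
A cleverer version could elide the copy when it can prove that
A's storage is not shared or is about to die."
  (m+! (copy-matrix a) b))
```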

> With respect to the question of using Lisp arrays: one disadvantage is that
> you cannot dispatch on array element type.  Wrapping a specialized array
> into a CLOS class you can dispatch on the type of the matrix/tensor and use
> specialized fast code (or call a specialized BLAS routine).  I have
> implemented this approach similar to Matlisp inside my PDE solving code
> "Femlisp" (look in the directory #p"femlisp;src;matlisp").

This is fast for large arrays, but it may not be for small fixed-size
arrays.  My work mostly involves large arrays, but some other people
may want specialized routines for small fixed-size ones.

mfh
From: ······@gmail.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1158882591.627307.70800@i42g2000cwa.googlegroups.com>
Juanjo wrote:
> That would be nice, and probably easy. The most complicated part is
> probably plotting and all the associated libraries that come with
> Matlab: optimization, interface to Maple, etc. Those are the only
> reason why I end up using Matlab.

While we are at it (and best of luck to Mark with this project!), I am
wondering if there is a Lisp-centric effort to redo (or translate from
Python) matplotlib (http://matplotlib.sourceforge.net/)?  Of course,
handling 3D plots would be a bonus! ;-)

In the meantime, some of the gnuplot interfaces seem to work, though...

Paul B.
From: ············@gmail.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1159172791.658175.3670@m7g2000cwm.googlegroups.com>
Many of you have given a lot of good advice over both e-mail and
comp.lang.lisp -- I've collected some of that good advice and put
together a web page that presents some existing work and discusses some
of the issues.  Here is the page:

http://www.cs.berkeley.edu/~mhoemmen/matlisp/

Note that this is a work in progress -- in particular, if you think
I've neglected to mention something important or made a mistake, please
let me know.  I also haven't mentioned all your names -- please
consider yourselves thanked, all of you :)

Best,
mfh
From: HL
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <863bag3zz9.fsf@agora.my.domain>
·············@gmail.com" <············@gmail.com> writes:

> Greetings!
> 
> I'm thinking of reworking / redoing Matlisp to support a whole
> framework for optimized linear algebra that exploits the BLAS and
> LAPACK.  Here are some features that I'd like to see:

Hi Mark --

 Let me throw a few pointers at you that might be of interest (I hope
 it's OK that some of this stuff is not Common Lisp, but "related" in
 some way):

1) Schelab, a numerical analysis library for Scheme

http://www.arrakis.es/~worm/schelab.html

 This is by Juan Jose Garcia Ripoll, "the ECL guy".

2) Software for Numerical Methods for Partial Differential Equations

 This stuff is for Gambit-C Scheme. Gambit-C generates C code, and
 this code has the same performance as...code written in C. :-)) So they
 claim in their benchmarks. From Purdue University. Object-oriented
 (Meroon object system):

http://www.math.purdue.edu/~lucier/615-2000/software/

 
3) As I mentioned in another post, the "Numerical Recipes guys" sell a
   CD-ROM that includes Common Lisp source code:

http://www.numerical-recipes.com/cdrom-blurb.html

4) Again, in the same vicinity, Doug Williams has ported all the
   code he previously wrote for Symbolics to DrScheme (which is very
   nice), and what you have is a mature codebase in two packages: the
   PLT Scheme Science Collection and the PLT Scheme Simulation
   Collection:

 http://drschemer.blogspot.com/2006/05/plt-scheme-projects.html
 http://drschemer.blogspot.com/2006/05/plt-scheme-science-collection.html
 
 Bill Clementson has blogged about this:

 http://bc.tech.coop/blog/060201.html

5) SML Matrix library (SML, not lisp!):
 
 http://www.cs.cmu.edu/afs/cs/project/pscico/pscico/src/matrix/README.html
 
 It's from the  PSciCo (Parallel Scientific Computing) project:
 http://www.cs.cmu.edu/~pscico/

6) Another thing in the vicinity: there's a book out called A
   Numerical Library in Java for Scientists and Engineers, by Hang
   T. Lau (ISBN: 1584884304) that has source code for the NUMAL
   library developed in the 70s by the Mathematisch Centrum in
   Amsterdam.  Apparently, this code is part of the Numerical Recipes
   codebase.  AFAIK, NUMAL was in the public domain (those were the
   days...).  It may be worth looking at and using.

 HTH.
 HL
From: Juanjo
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1159193619.202015.281580@d34g2000cwd.googlegroups.com>
HL wrote:
> 1) Schelab, a numerical analysis library for Scheme
>
> http://www.arrakis.es/~worm/schelab.html
>
>  This is by Juan Jose Garcia Ripoll, "the ECL guy".

I am afraid I have lost this code, which was written a long, long time
ago.

Juanjo
From: ············@gmail.com
Subject: Re: Redoing Matlisp?
Date: 
Message-ID: <1159240186.558482.89770@b28g2000cwb.googlegroups.com>
Thanks for the references!

HL wrote:
> 2) Software for Numerical Methods for Partial Differential Equations
>
>  This stuff is for Gambit-C Scheme. Gambit-C generates C code, and
>  this code has the same performance as...code written in C. :-)) So they
> claim in their benchmarks. From Purdue University. Object-oriented
>  (Meroon object system):
>
> http://www.math.purdue.edu/~lucier/615-2000/software/

Sparse matrix codes, especially for solving PDEs, tend to be highly
responsive to tuning based on matrix structure.  If I include a sparse
matrix package I'll probably use OSKI (bebop.cs.berkeley.edu/oski)
which does a lot of that.  It's way too much work for me to replicate
that in Lisp (though it would be interesting to do a performance
comparison of the C and Lisp compilers...).

That brings up an interesting question -- how do I convince Lisp to
call a particular function (to release resources allocated by the C
library) when it shuts down?  OSKI may do things in the background,
like opening files and playing with local databases, and I want to
make sure that buffers are drained and files closed when Lisp exits.
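One possibility I've seen mentioned is implementation-specific exit
hooks -- e.g. SBCL's SB-EXT:*EXIT-HOOKS* -- though I haven't checked
every implementation, and OSKI-SHUTDOWN below is a hypothetical FFI
wrapper around the C library's cleanup entry point:

```lisp
(defun oski-shutdown ()
  ;; hypothetical: call the C library's cleanup routine via the FFI
  )

#+sbcl (push #'oski-shutdown sb-ext:*exit-hooks*)
```

For a single computation, wrapping it in UNWIND-PROTECT is a portable
alternative, but that doesn't cover the whole-session case.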

> 4) Again, in the same vicinity, Doug Williams has ported all his
>    code previously for Symbolics to DrScheme (which is very nice),
>    and what you have is a mature codebase in two packages: PLT Scheme
>    Science Collection and PLT Scheme Simulation Collection:
>
>  http://drschemer.blogspot.com/2006/05/plt-scheme-projects.html
>  http://drschemer.blogspot.com/2006/05/plt-scheme-science-collection.html

I remember how much I wanted a general simulation package when I was
working a few years ago.  I'll definitely take a look at that and see
how Doug W. handled the linear algebra.

Many thanks!
mfh