From: Tony
Subject: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <18dae34d.4d4aa386@usw-ex0110-076.remarq.com>
I use Lisp largely to write code for evolving Neural
Networks, which can be a very time-consuming task. I'm
currently running Allegro Lisp in a Win 95 environment, and
my programs take days (sometimes weeks) to run. Is there any
way to speed things up a bit?

Oh, and does anyone know whether Allegro Lisp would work
under Linux, or would I need a different version?



From: Marco Antoniotti
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <lwaem37fe5.fsf@parades.rm.cnr.it>
Tony <············@chice.force9.co.uk.invalid> writes:

> I use Lisp largely to write code for evolving Neural
> Networks, which can be a very time-consuming task. I'm
> currently running Allegro Lisp in a Win 95 environment, and
> my programs take days (sometimes weeks) to run. Is there any
> way to speed things up a bit?

Maybe by using better data structures and algorithms? :}

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Arvid Grøtting
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <yg11z7fh7zc.fsf@pc-arvidg.infotek.no>
* Marco Antoniotti
> Tony <············@chice.force9.co.uk.invalid> writes:
> 
> > I use Lisp largely to write code for evolving Neural
> > Networks, which can be a very time-consuming task. I'm
> > currently running Allegro Lisp in a Win 95 environment, and
> > my programs take days (sometimes weeks) to run. Is there any
> > way to speed things up a bit?
> 
> Maybe by using better data structures and algorithms? :}

:-)

One problem with neural networks is that they're inherently both
massively parallel and quite analog.  Doing them efficiently on a
mostly-sequential, definitely-digital computer can be a pain, even
where possible.

Then again, analog VLSI, while efficient for these computations, has
quite a long develop-compile-test cycle :-)

-- 
Arvid

colonel Uzi Semtex security ammunition arrangements Waco, Texas CIA
From: Marco Antoniotti
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <lw4scb7b4h.fsf@parades.rm.cnr.it>
Arvid Grøtting <······@pc-arvidg.infotek.no> writes:

> * Marco Antoniotti
> > Tony <············@chice.force9.co.uk.invalid> writes:
> > 
> > > I use Lisp largely to write code for evolving Neural
> > > Networks, which can be a very time-consuming task. I'm
> > > currently running Allegro Lisp in a Win 95 environment, and
> > > my programs take days (sometimes weeks) to run. Is there any
> > > way to speed things up a bit?
> > 
> > Maybe by using better data structures and algorithms? :}
> 
> :-)
> 
> One problem with neural networks is that they're inherently both
> massively parallel and quite analog.  Doing them efficiently on a
> mostly-sequential, definitely-digital computer can be a pain, even
> where possible.
> 
> Then again, analog VLSI, while efficient for these computations, has
> quite a long develop-compile-test cycle :-)

Of course.  But then again, what are the main data structures you are
using?

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Tony
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <038c8397.85128a48@usw-ex0109-069.remarq.com>
The networks are stored as lists containing structures
defining the connections. My algorithms are probably not
perfect (with regard to optimising speed), but the real problem
lies with LISP itself: if I define a simple algorithm in,
say, C, and run the two back to back, the C program will
on average take about half the time the LISP one needs to
run. I'm currently just running from the editor into
toploop, but was hoping that the file could be compiled into
an .exe or something to speed it up.
With regard to my data structures: a recurrent network of
10 nodes, with an average connectivity of say 5, evolving
with a population of about 30 using a SAGA-style GA for
about 100,000 generations, means that the node-summing
section alone has been executed 150,000,000 times. Thus it
takes ages.


From: Robert Monfera
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <3884915B.44BBB6C6@fisec.com>
Tony wrote:

> The networks are stored as lists containing structures
> defining the connections,

Consider using arrays and declarations (there are lots of examples if
you watch this newsgroup).
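
For instance, something along these lines -- a hypothetical
weighted-sum, not taken from your code, with the element type only an
assumption:

(defun weighted-sum (weights inputs)
  ;; typed simple arrays plus declarations let the compiler use unboxed
  ;; float arithmetic instead of generic arithmetic over list elements
  (declare (type (simple-array single-float (*)) weights inputs)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0.0))
    (declare (type single-float sum))
    (dotimes (i (length weights) sum)
      (incf sum (* (aref weights i) (aref inputs i))))))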

> I'm currently just running from the editor into
> toploop, but was hoping that the file could be compiled into
> an .exe or something to speed it up.

You could compile your function:

> (defun foo ...)

> (compile 'foo)

> (time (foo ...))

If you show parts of the code, you will more likely get specific
suggestions about data structures and declarations.

Regards
Robert
From: Tony
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <040f1118.94ec5ed7@usw-ex0109-069.remarq.com>
Some of the data is contained in simple-vectors, other bits
in arrays, within the structures, but the use of arrays is
wasteful in this situation as the ultimate size is unknown.


From: Erik Naggum
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <3157208830761229@naggum.no>
* Tony <············@chice.force9.co.uk.invalid>
| Some of the data is contained in simple-vectors, other bits
| in arrays, within the structures, but the use of arrays is
| wasteful in this situation as the ultimate size is unknown.

  well, you _could_ use adjustable vectors with fill pointers if such is
  your concern.  Common Lisp knows that the tradeoff that people made with
  lists carried a performance penalty.  look at VECTOR-PUSH and VECTOR-POP
  for ways to deal with unknown sizes and still get O(1) access times.
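
  a minimal sketch of that approach (the element type here is only an
  assumption):

(defparameter *outputs*
  (make-array 0 :element-type 'single-float
                :adjustable t
                :fill-pointer 0))

(vector-push-extend 0.5 *outputs*)   ; append, growing the vector if needed
(vector-pop *outputs*)               ; remove and return the last element
;; AREF on *outputs* remains O(1) throughout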

#:Erik
From: Tony
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <02867139.9cb43b3c@usw-ex0110-073.remarq.com>
I've just put up some LISP code for a Dynamic Recurrent
Neural Net, so you can see how my data structures work.
My code is probably horrible to look at, but I've semi-
documented it, so it shouldn't be too bad. Anyway, let me know
if you think it can be sped up at all, or otherwise improved.
It's all on my home page at www.chice.force9.co.uk



From: Tim Bradshaw
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <ey3ya9l7s4g.fsf@cley.com>
* Tony  wrote:
> I've just put up some LISP code for a Dynamic Recurrent
> Neural Net, so you can see how my data structures work.

Here's a rewrite of the start of it.  It uses arrays pervasively, and
doesn't have global variables but rather passes the activity array in
to various functions.  I gave up slightly before the end as I ran out
of time.

CMUCL should run this kind of code really well (although I haven't even
tried compiling anything other than the Y function, so it might be buggy
and/or need better declarations).


;; Code for a list-based Recurrent Neural Network
;; We will need a global list of length = no_of_nodes, which gives the current activation levels of 
;; every node in the network. This list will be updated and accessed by future functions.

;(defun setup_activity (no_of_nodes)
;   (setf *structure* nil)
;   (setf *structure* (cons 1 (dotimes (x no_of_nodes *structure*)
;			       (setf *structure* (cons '0 *structure*))))))

(defun make-activity (n)
  (let ((a (make-array n 
		       :element-type 'single-float
		       :initial-element 0.0s0)))
    (setf (aref a 0) 1.0s0)
    a))

;;(setup_activity 10)
;;>(1 0 0 0 0 0 0 0 0 0 0)
;;(setup_activity 20)
;;>(1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)

;; Starting with the functions to create a single neurone/node
;; I need to create a structure to hold the weights and any other information about the node.

;(defstruct node
;  weights_list  ;a list of weights, including the bias weight
;  input_list)   ;a list of input locations pointing to positions in *structure* including the bias node


(defstruct (node 
	    (:constructor %make-node))
  weights-array
  input-array)
  
(defun make-node (weights-list input-list)
  (%make-node :weights-array (make-array (length weights-list)
					 :element-type 'single-float
					 :initial-contents weights-list)
	      :input-array (make-array (length input-list)
				       :element-type 'fixnum
				       :initial-contents input-list)))
	      

;(defun mk_node_struct (weights_list input_list)
;   (setf *node* (make-node
;                  :weights_list weights_list
;                  :input_list input_list)))




;; I now need to create a set of functions to calculate the output of the neurone.
;; function a takes the weights and the input and sums w1*i1+..+wn*in
;; returning the value of a for the given node.

;(defun a (weights_list input_list)
;   (cond ((not (equal (length weights_list)(length input_list))) 'error)
;         ((null weights_list) 0)
;         (t (+ (a (cdr weights_list)(cdr input_list))
;	       (* (car weights_list)(nth (car input_list) *structure*))))))

(defun a (node activity)
  (declare (type node node)
	   (type (simple-array single-float (*)) activity))
  (let ((wa (node-weights-array node))
	(ia (node-input-array node))
	(sum 0.0s0))
    (declare (type (simple-array single-float (*)) wa)
	     (type (simple-array fixnum (*)) ia)
	     (type single-float sum))
    (when (not (= (length wa) (length ia)))
      (error "incompatible thingies"))
    (dotimes (i (length wa) sum)
      (declare (type fixnum i))
      (setf sum (+ sum
		   (* (aref wa i)
		      (aref activity (aref ia i))))))))
	  

;;(a '(1 2 3) '(0 1 5))
;;>1

;; Now we need a sigmoid function to work out y given a as input

;(defun y (a)
;  (/ 1 (+ (exp (- a)) 1)))

(declaim (inline y))

(defun y (a)
  ;; I can't get allegro to compile this well (but I haven't tried
  ;; very hard). CMUCL does really well.
  (declare (type single-float a))
  (/ 1.0s0 (+ (exp (- a)) 1.0s0)))
	       

;;(y 0)
;;> 0.5
;;(y 1)
;;> 0.73105...
;;(y -0.7)
;;> 0.331812227831834

;; Now we need a function that will take the current *node* structure and return y

;(defun Run_node ()
;  (y (a (node-weights_list *node*)(node-input_list *node*))))

(declaim (inline run-node))

(defun run-node (node activity)
  (declare (type node node)
	   (type (simple-array single-float (*)) activity))
  (y (a node activity)))
    


;;(mk_node_struct '(2 3 6 -3) '(0 1 2 3))
;;>#S(NODE WEIGHTS_LIST (2 3 6 -3) INPUT_LIST (0 1 2 3))
;;(run_node)

;;> 0.880797077977882

;; Right, now that we have functions to work out the output of a given node, we need to update 
;; the *structure* with this new output value, and go through for each node

;(defun full_pass (node_struct_list)
;   (cond ((null node_struct_list) *structure*)
;         (t (dotimes (x (length node_struct_list) *structure*)
;               (setf *node* (nth x node_struct_list))
;	      (setf (nth (+ x 1) *structure*)(run_node))))))

(defun full-pass (node-list activity)
  (declare (type (simple-array single-float (*)) activity))
  ;; node outputs go in slots 1..n; slot 0 holds the constant bias input
  (loop for i of-type fixnum upfrom 1
      for n of-type node in node-list
      do
	(setf (aref activity i) (run-node n activity))))

;;; ... ran out of time ...
From: Raymond Toy
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <4naem14u0s.fsf@rtp.ericsson.se>
>>>>> "Tim" == Tim Bradshaw <···@cley.com> writes:


    Tim> (defun make-activity (n)
    Tim>   (let ((a (make-array n 
    Tim> 		       :element-type 'single-float
    Tim> 		       :initial-element 0.0s0)))
    Tim>     (setf (aref a 0) 1.0s0)
    Tim>     a))

[snip]
    Tim> (defun y (a)
    Tim>   ;; I can't get allegro to compile this well (but I haven't tried
    Tim>   ;; very hard). CMUCL does really well.
    Tim>   (declare (type single-float a))
    Tim>   (/ 1.0s0 (+ (exp (- a)) 1.0s0)))
	       

A common typo:  1.0s0 is a short-float, not a single-float.  This
doesn't matter to CMUCL since short-float and single-float are the
same, but it might matter to others.
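
A quick way to check what a given float literal means in your own
implementation:

(type-of 1.0s0)       ; SHORT-FLOAT, or SINGLE-FLOAT where the two coincide
(type-of 1.0f0)       ; SINGLE-FLOAT -- F is the single-float exponent marker
(type-of 1.0d0)       ; DOUBLE-FLOAT
(float-digits 1.0f0)  ; 24 on implementations with IEEE single precision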

Ray
From: Tim Bradshaw
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <ey3r9fd7jyz.fsf@cley.com>
* Raymond Toy wrote:

> A common typo:  1.0s0 is a short-float, not a single-float.  This
> doesn't matter to CMUCL since short-float and single-float are the
> same, but it might matter to others.

Are there any common platforms where short and single floats are
different?  (I'm not trying to deny it was a mistake; I can never
remember what the syntax for a short-float is, though...)

--tim
From: Raymond Toy
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <4nya9l38oh.fsf@rtp.ericsson.se>
>>>>> "Tim" == Tim Bradshaw <···@cley.com> writes:

    Tim> * Raymond Toy wrote:
    >> A common typo:  1.0s0 is a short-float, not a single-float.  This
    >> doesn't matter to CMUCL since short-float and single-float are the
    >> same, but it might matter to others.

    Tim> Are there any common platforms where short and single floats are
    Tim> different?  (I'm not trying to deny it was a mistake, I can never
    Tim> remember what the syntax for a short-float is though...)

Likewise.

CLISP does:

[1]> (float-digits 1s0)
17
[2]> (float-digits 1f0)
24


Don't know of any others.

Ray
From: Thomas A. Russ
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <ymiwvp41fyf.fsf@sevak.isi.edu>
>>>>> "Tim" == Tim Bradshaw <···@cley.com> writes:
> 
>     Tim> * Raymond Toy wrote:
>     >> A common typo:  1.0s0 is a short-float, not a single-float.  This
>     >> doesn't matter to CMUCL since short-float and single-float are the
>     >> same, but it might matter to others.
> 
>     Tim> Are there any common platforms where short and single floats are
>     Tim> different?  (I'm not trying to deny it was a mistake, I can never
>     Tim> remember what the syntax for a short-float is though...)

In MCL (on a PowerPC), a short float has float-digits = 24, while
single, double and long floats all have float-digits = 53;
single, double and long are all implemented as double-floats.
This is another case where the short and single floats are different sizes.

IIRC the Lispm could fit a short float as an immediate data type into a
single machine word, so it would have had a different size for a short
float versus a single-float.

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Tim Bradshaw
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <ey3vh4o5cmq.fsf@cley.com>
* Thomas A Russ wrote:

> IIRC the Lispm could fit a short float as an immediate data type into a
> single machine word, so it would have had a different size for a short
> float versus a single-float.

No, single-float and short-float are the same type on LispMs (at least
Symbolics Ivory and I'm 98% sure on 36xx too).  They are both IEEE
single-precision quantities fitting in 32 bits (which fits in a machine
word including tag on a lispm).

However from other posts I was comprehensively wrong about single and
short floats being the same on all platforms.  I will have to change
my ways.

--tim
From: Raymond Toy
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <4n66wn2w71.fsf@rtp.ericsson.se>
>>>>> "Tim" == Tim Bradshaw <···@cley.com> writes:

    Tim> However from other posts I was comprehensively wrong about single and
    Tim> short floats being the same on all platforms.  I will have to change
    Tim> my ways.

One other thing that gets me is that the type of pi depends on the
platform.  I usually use CMUCL on Sparc, where pi is a double-float.
However, in CMUCL on x86 (experimental version) pi is a long-float, and
all of my carefully, painstakingly declared routines for optimum speed
don't work anymore. :-)
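
One portable workaround -- a sketch, assuming the routines are declared
for double-floats:

;; PI is specified to be a long-float; coerce it once so double-float
;; declarations stay valid even where LONG-FLOAT is wider than DOUBLE-FLOAT.
(defconstant +double-pi+ (coerce pi 'double-float))

(defun circle-area (r)
  (declare (type double-float r))
  (* +double-pi+ r r))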

Ray
From: Tim Bradshaw
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <ey3r9fb3wsk.fsf@cley.com>
* Raymond Toy wrote:
>>>>>> "Tim" == Tim Bradshaw <···@cley.com> writes:
Tim> However from other posts I was comprehensively wrong about single and
Tim> short floats being the same on all platforms.  I will have to change
Tim> my ways.

> One other thing that gets me is that the type of pi depends on the
> platform.  I usually use CMUCL on Sparc where pi is a double-float.
> However, CMUCL on x86 (experimental version) pi is a long-float and
> all of my carefully, painstakingly declared routines for optimum speed
> don't work anymore. :-)

That can only not be a bug if double-float is the same type as
long-float on the sparc port!

--tim
From: Raymond Toy
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <4nr9fb0z5l.fsf@rtp.ericsson.se>
>>>>> "Tim" == Tim Bradshaw <···@cley.com> writes:

    Tim> * Raymond Toy wrote:
    >> One other thing that gets me is that the type of pi depends on the
    >> platform.  I usually use CMUCL on Sparc where pi is a double-float.
    >> However, CMUCL on x86 (experimental version) pi is a long-float and
    >> all of my carefully, painstakingly declared routines for optimum speed
    >> don't work anymore. :-)

    Tim> That can only not be a bug if double-float is the same type as
    Tim> long-float on the sparc port!

Yes, I neglected to mention that on the sparc port double-float and
long-float are the same.  (There has been some work to make
long-float on the sparc port use the quad-precision floats available
on the Ultrasparc.  However, quad floats are all done in software, so
I'm not much interested in that; too slow.)

Ray
From: Roger Corman
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <38876faf.265264099@nntp.best.com>
On 19 Jan 2000 22:52:20 +0000, Tim Bradshaw <···@cley.com> wrote:

>* Raymond Toy wrote:
>
>> A common typo:  1.0s0 is a short-float, not a single-float.  This
>> doesn't matter to CMUCL since short-float and single-float are the
>> same, but it might matter to others.
>
>Are there any common platforms where short and single floats are
>different?  (I'm not trying to deny it was a mistake, I can never
>remember what the syntax for a short-float is though...)
In Corman Lisp (1.3 and later), a short float fits into a tagged word,
like a fixnum, and is 30 bits. A single float is an IEEE 32-bit float.
A double-float is IEEE 64-bits. Of course the short float is efficient
to pass and store (no boxing) but the math operators all convert it
to/from 32-bits for operations. It basically just rounds off the last
two bits of the mantissa to get from 32-bits to 30.

Roger Corman
From: Tony
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <0f5b83e7.f72366b5@usw-ex0109-066.remarq.com>
The weight values need to be as accurate as possible, thus a
double float should be used instead of a short or single.
The reason I have used global variables is simple: the
network is run alongside other networks, whose outputs can
affect this network. I have done this by creating a list of
networks to be executed for one move each before any make a
second move. The genetic algorithm must also be able to
mutate a network without damaging others. If I use pointers
at this network level, then mutating one network seems to
mutate them all. The simple way around this is to use global
declarations, only ever working on the current global, and
resetting it from a list of networks.


From: Marco Antoniotti
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <lw3drrdbfe.fsf@parades.rm.cnr.it>
Just spotted an NTH.  You are in for big efficiency improvements.

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Tim Bradshaw
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <ey3ln5j60xu.fsf@cley.com>
* Tony  wrote:
> The weight values need to be as accurate as possible, thus a
> double float should be used instead of a short or single.

I'm kind of surprised you really need that kind of accuracy, but you
can just change the declarations to doubles.

> The simple way around this is to use global
> declarations, only ever working on the current global, and
> resetting it from a list of networks.

A better way would be to fix the bug though.
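
For instance, a sketch of one way to do that: deep-copy a network
before mutating it, so no structure is shared.  Slot names follow the
earlier list-based NODE struct; MUTATE and *POPULATION* are
hypothetical.

;; A network here is a list of NODE structs whose slots hold lists.
;; Mutating the copy cannot affect any other network in the population.
(defun copy-network (network)
  (mapcar (lambda (node)
            (make-node :weights_list (copy-list (node-weights_list node))
                       :input_list   (copy-list (node-input_list node))))
          network))

;; e.g. mutate a fresh copy instead of the shared original:
;; (mutate (copy-network (nth i *population*)))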

--tim
From: R. Matthew Emerson
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <87ogahsh5c.fsf@nightfly.apk.net>
Tim Bradshaw <···@cley.com> writes:

> * Raymond Toy wrote:
> 
> > A common typo:  1.0s0 is a short-float, not a single-float.  This
> > doesn't matter to CMUCL since short-float and single-float are the
> > same, but it might matter to others.
> 
> Are there any common platforms where short and single floats are
> different?  (I'm not trying to deny it was a mistake, I can never
> remember what the syntax for a short-float is though...)
> 

[I posted this from work already, but it looks like it didn't make it out.]

In Macintosh Common Lisp, as of version 4.1, a short-float is an
IEEE single precision quantity (32 bits).  The rest of the float
types (single-float, double-float, long-float) are IEEE double
precision.

-matt
From: Bruce Tobin
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <3884A125.220C44AB@columbus.rr.com>
Tony wrote:
> 
> If I define a simple algorithm in say
> C for example, and run them back to back, the C program will
> on average take about half the time the LISP one needs to
> run.

Well, that's about right, or it was the last time I ran similar tests.
You can (or could, at that time) get a 100% speedup by going to
ACL/Linux, at which point you'll be slightly faster than a g++
implementation of a lispy algorithm, but maybe not as fast as a gcc
implementation of same.  Shouldn't be a lot of difference, though.

> I'm currently just running from the editor into
> toploop, but was hoping that the file could be compiled into
> an .exe or something to speed it up.

Well, if you're not compiling before you run, you could see a dramatic
speedup by compiling.  However, I wouldn't expect you to benchmark as
well against a C implementation if you weren't compiling somehow.  What
version of Allegro are you using?  ACL 3.x compiles automatically, but
it doesn't optimize as well as more recent versions.  If that's what
you're using you should switch to something newer, and then make sure
you compile.
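
The basic recipe from the toploop (the file name here is hypothetical):

;; compile the whole file to a fasl, then load the compiled version
(load (compile-file "neural-net.lisp"))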
From: Marco Antoniotti
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <lwwvp6qtd0.fsf@parades.rm.cnr.it>
Tony <············@chice.force9.co.uk.invalid> writes:

> The networks are stored as lists containing structures
> defining the connections,

How many connections?  Do you always run through all the connections
(DOLIST) or do you occasionally need to access the i-th connection to
do something with it?

> my algorithms are probably not
> perfect (with regards to optimising speed) the real problem
> lies with LISP itself, If I define a simple algorithm in say
> C for example, and run them back to back, the C program will
> on average take about half the time the LISP one needs to
> run.

Do you use 'lists' in C as well?  Or do you 'malloc' an array to
represent your connections?

> I'm currently just running from the editor into
> toploop, but was hoping that the file could be compiled into
> an .exe or something to speed it up.

> With regards to my data structures, a recurrent network of
> 10 nodes, with an average connectivity of say 5, evolving
> with a population of about 30 using a SAGA-style GA for
> about 100,000 generations, means that the node-summing
> section alone has been executed 150,000,000 times. Thus it
> takes ages.

Apologies, but this does not mean much to me.  Not being an NN expert I
can only surmise the graph structure of the net, but not much beyond
it.  If you posted some code, the newsgroup could be more helpful.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Robert Posey
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <3885D4F7.3D3C6E18@raytheon.com>
Marco Antoniotti wrote:
> 
> Tony <············@chice.force9.co.uk.invalid> writes:
> 
> > The networks are stored as lists containing structures
> > defining the connections,
> 
> How many connections?  Do you always run through all the connections
> (DOLIST) or do you occasionally need to access the i-th connection to
> do something with it?
> 
> > my algorithms are probably not
> > perfect (with regards to optimising speed) the real problem
> > lies with LISP itself, If I define a simple algorithm in say
> > C for example, and run them back to back, the C program will
> > on average take about half the time the LISP one needs to
> > run.
> 
> Do you use 'lists' in C as well?  Or do you 'malloc' an array to
> represent your connections?

You can do lists fine in STL if you go to C++, and they support
insertion, deletion and searches fairly well.

>
From: Chuck Fry
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <3885ee09$0$201@nntp1.ba.best.com>
In article <·················@raytheon.com>,
Robert Posey  <·····@raytheon.com> wrote:
>Marco Antoniotti wrote:
>> Do you use 'lists' in C as well?  Or do you 'malloc' an array to
>> represent your connections?
>
>You can do lists fine in STL if you go to C++, and they support 
>inserts, deletion and searches fairly well.

If memory serves, STL's lists are doubly-linked.  They're also a pain in
the *ss to use with heterogeneous data.

 -- Chuck
--
	    Chuck Fry -- Jack of all trades, master of none
 ······@chucko.com (text only please)  ········@home.com (MIME enabled)
Lisp bigot, car nut, photographer, sometime guitarist and mountain biker
The addresses above are real.  All spammers will be reported to their ISPs.
From: Marco Antoniotti
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <lwn1q2vuuf.fsf@parades.rm.cnr.it>
Robert Posey <·····@raytheon.com> writes:

> Marco Antoniotti wrote:
> > 
> > Tony <············@chice.force9.co.uk.invalid> writes:
> > 
> > > The networks are stored as lists containing structures
> > > defining the connections,
> > 
> > How many connections?  Do you always run through all the connections
> > (DOLIST) or do you occasionally need to access the i-th connection to
> > do something with it?
> > 
> > > my algorithms are probably not
> > > perfect (with regards to optimising speed) the real problem
> > > lies with LISP itself, If I define a simple algorithm in say
> > > C for example, and run them back to back, the C program will
> > > on average take about half the time the LISP one needs to
> > > run.
> > 
> > Do you use 'lists' in C as well?  Or do you 'malloc' an array to
> > represent your connections?
> 
> You can do lists fine in STL if you go to C++, and they support 
> inserts, deletion and searches fairly well.

Why go C++/STL when you can have the real thing? :)

Apart from that, one of the most common pitfalls of Lisp vs C
comparisons is that the data structures used in C are different from
those used in Lisp.  If I use a 'malloc-ed' array in C, I am bound to
get better performance than with a list in Lisp, especially if I access
the i-th element outside of an ordered traversal.

I bet that using lists with C++/STL would result in larger and slower
code as well, w.r.t. an equivalent C/C++ array-based implementation
(not to speak of any comparison with Lisp).

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Matt Wette
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <7k1z7dhhzc.fsf@jpl.nasa.gov>
Marco Antoniotti <·······@parades.rm.cnr.it> writes:

> Why go C++/STL when you can have the real thing? :)
> 
> > Apart from that, one of the most common pitfalls of Lisp vs C
> comparisons is that the data structures used in C are different from
> those used in Lisp.  If I use a 'malloc-ed' array in C, I am bound to
> get better performance than a list in Lisp, especially if you access
> the i-th element out of an ordered traversal.
> 
> I bet that using lists with C++/STL would result in larger code and
> slower as well w.r.t. an equivalent C/C++ array based implementation
> (not to speak of any comparison with Lisp).

Not sure I buy that one.  In C++/STL you can build a linked list
structure where the data is part of the cell:
   -----------------------------------------      -------
  | data | more data | and so forth | next* | -> | data ...
   -----------------------------------------      -------

In LISP, my understanding is that lists are held together with cons
cells:

   ---------------      ---------------
  | data* | next* | -> | data* | next* |
   ---------------      ---------------
      |  
   ---------------------------------
  | data | more data | and so forth |
   ---------------------------------

So once you get your hands on the list cell in C++ you have the data.
In LISP you have one more pointer dereference to do.

Can you tell me your rationale for the LISP speedup?

Matt
-- 
Matthew.R.Wette at jpl.nasa.gov -- I speak for myself, not for JPL.
From: Roger Corman
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <38877118.265625599@nntp.best.com>
On 19 Jan 2000 13:25:11 -0800, Matt Wette
<···············@jpl.nasa.gov> wrote:

>So once you get your hands on the list cell in C++ you have the data.
>In LISP you have one more pointer dereference to do.
>
>Can you tell me your rationale for the LISP speedup?
Well, if the data is a fixnum or something else small it will be
stored in the cell, not a pointer (for many versions of lisp).
Also, you can certainly link structs together in Lisp, which does just
what you are doing in C++ (stores data and pointer in the same heap
object).

However, I don't think the issue of a pointer dereference is worth
thinking about. In my experience Lisp is much faster than C++ for list
management for several reasons.

Memory management: lisp systems are optimized to allocate small pieces
for lists, unlike "new" or "malloc" which are usually far slower than
CONS. Also, CONS will often be inlined, even further speeding things
up. (I have never seen new or malloc inlined in C or C++).
A good garbage collector should be able to free large numbers of list
elements much faster than calling free() on each one explicitly.

Because lisp is designed and optimized for list processing, operations
which traverse and alter list structures tend to be highly optimized
and inlined. Many standard lisp benchmarks stress these operations,
and compiler vendors have responded by using every possible trick to
make lists as fast as possible.

Of course in C++ you could write your own memory management functions,
and write your own macros for handling lists, and approach the
efficiency of lisp if you spend enough time on it. I think of this as
basically the "I can build Lisp in C++, therefore C++ can do anything
Lisp can do" argument.

Well, I could write a C++ compiler in Lisp...  :-)

Roger Corman
From: Raymond Wiker
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <87iu0pwa4q.fsf@foobar.orion.no>
Matt Wette <···············@jpl.nasa.gov> writes:

> Marco Antoniotti <·······@parades.rm.cnr.it> writes:
> 
> > Why go C++/STL when you can have the real thing? :)
> > 
> > Apart from that, one of the most common pitfalls of Lisp vs C
> > comparisons is that the data structures used in C are different from
> > those used in Lisp.  If I use a 'malloc-ed' array in C, I am bound to
> > get better performance than a list in Lisp, especially if you access
> > the i-th element out of an ordered traversal.
> > 
> > I bet that using lists with C++/STL would result in larger code and
> > slower as well w.r.t. an equivalent C/C++ array based implementation
> > (not to speak of any comparison with Lisp).
> 
> Not sure I buy that one.  In C++/STL you can build a linked list
> structure where the data is part of the cell:
>    -----------------------------------------      -------
>   | data | more data | and so forth | next* | -> | data ...
>    -----------------------------------------      -------

        Hum. This is only the case when the data type is primitive or
has a copy constructor, and that means that you'll end up copying the
data value(s) into the list, and possibly also when doing a read
access. In some cases you may want to use a list of pointers instead,
in which case you end up with a similar list structure to what LISP
uses...

        So, the speedup for LISP is actually quite plausible.

        //Raymond.

-- 
Raymond Wiker, Orion Systems AS
+47 370 61150
From: Marco Antoniotti
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <lwr9fa9lc6.fsf@parades.rm.cnr.it>
Matt Wette <···············@jpl.nasa.gov> writes:

> Marco Antoniotti <·······@parades.rm.cnr.it> writes:
> 
> > Why go C++/STL when you can have the real thing? :)
> > 
> > Apart from that, one of the most common pitfalls of Lisp vs C
> > comparisons is that the data structures used in C are different from
> > those used in Lisp.  If I use a 'malloc-ed' array in C, I am bound to
> > get better performance than a list in Lisp, especially if you access
> > the i-th element out of an ordered traversal.
> > 
> > I bet that using lists with C++/STL would result in larger code and
> > slower as well w.r.t. an equivalent C/C++ array based implementation
> > (not to speak of any comparison with Lisp).
> 
> Not sure I buy that one.  In C++/STL you can build a linked list
> structure where the data is part of the cell:
>    -----------------------------------------      -------
>   | data | more data | and so forth | next* | -> | data ...
>    -----------------------------------------      -------
> 
> In LISP, my understanding is that lists are held together with cons
> cells:
> 
>    ---------------      ---------------
>   | data* | next* | -> | data* | next* |
>    ---------------      ---------------
>       |  
>    ---------------------------------
>   | data | more data | and so forth |
>    ---------------------------------
> 
> So once you get your hands on the list cell in C++ you have the
> data.

Which is usually a pointer to some data structure. Especially if you
want a list of heterogeneous items or just a list of items deriving
from the same class.  For 'simple' types usually you do not need lists
and you use (adjustable) arrays in CL.

> In LISP you have one more pointer dereference to do.

I bet common practice leads to the extra pointer as well in C++.
Moreover, lists in the STL are doubly linked. Hence you get an extra
pointer per cell.

> Can you tell me your rationale for the LISP speedup?

You spend less time programming.  Time to market improves, you win.
Here is the speedup :)

Besides, Lisp is the real thing. :)

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Andy Freeman
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <86e38q$1g$1@nnrp1.deja.com>
In article <··············@parades.rm.cnr.it>,
  Marco Antoniotti <·······@parades.rm.cnr.it> wrote:
> You spend less time programming.  Time to market improves, you win.

Or, you spend less time "programming" your data structures and
more time programming your problem, so your product is better
in the same amount of time.  (Big speedups are found in algorithms
and other "problem" programming, not in reducing the cost of
allocating/deallocating an array.)

For example, if you spend more than 0.1% of your time worrying
about/programming memory allocation, you're behind folks who get
to use GC.

It's like generalizing the notion that any program with printf
can be sped up by additional programming to replace the printfs
with "optimized" i/o routines.  However, doing so is almost always
a bad idea even with crappy printf implementations - the benefit
is swamped by the costs.

-andy


From: John Watton
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <8629o6$grr$1@nnrp1.deja.com>
In article <·················@usw-ex0110-076.remarq.com>,
  Tony <············@chice.force9.co.uk.invalid> wrote:
> I use Lisp largely to write code for evolving Neural
> Networks, which can be a very time-consuming task. I'm
> currently running Allegro Lisp in a Win 95 environment, and
> my programs take days (sometimes weeks) to run. Is there any
> way to speed things up a bit?

If you have ACL 4.3 or greater then you should try profiling to
determine where your bottlenecks are. See:

http://www.franz.com/support/docs/5.0/doc/cl/profiling.htm

Once you know where the bottlenecks are then you can attack with
optimizations to the algorithms and/or declarations.
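
Even before reaching for the profiler, TIME on a suspected hot spot
gives a first cut (FULL-PASS and its arguments below are placeholders
for your own code):

;; time the node-summing loop in isolation, repeated enough to measure
(time (dotimes (i 10000)
        (full-pass *network* *activity*)))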

> Oh, and does anyone know whether Allegro Lisp would work
> under Linux or would I need a different version?

You will need a different version. See www.franz.com.

--
John Watton
Alcoa Inc.


From: Noah Kleiman
Subject: Re: Speed, Faster... Faster... Slower?
Date: 
Message-ID: <38856BD3.89EE979@cats.ucsc.edu>
You can get ACL Lite for Linux free from Franz...
and it works -- I've been using it.

Have you compiled your functions?  If you don't, it really slows things down...



Tony wrote:

> I use Lisp largely to write code for evolving Neural
> Networks, which can be a very time-consuming task. I'm
> currently running Allegro Lisp in a Win 95 environment, and
> my programs take days (sometimes weeks) to run. Is there any
> way to speed things up a bit?
>
> Oh, and does anyone know whether Allegro Lisp would work
> under Linux or would I need a different version?