From: Frederick Crabbe
Subject: Brent's Method?
Date: 
Message-ID: <wzc3cjhtgkz.fsf@crab.cs.usna.edu>
Say, does anyone have a CL implementation of Brent's Method for
finding the minimum (or max) of a function of one variable?  I would
have thought it could be found in the repositories, but alas, my
search failed..

cheers,
ric

From: Pascal Bourguignon
Subject: Re: Brent's Method?
Date: 
Message-ID: <87el31us4v.fsf@thalassa.informatimago.com>
Frederick Crabbe <······@usna.edu> writes:

> Say, does anyone have a CL implementation of Brent's Method for
> finding the minimum (or max) of a function of one variable?  I would
> have thought it could be found in the repositories, but alas, my
> search failed..
> 
> cheers,
> ric

Why should there be a library for such a simple function?

Will you be able to use it?

You may want to optimize by getting the constants out of BRENT-STEP...



(DEFUN BRENT-STEP (F X1 X2 X3)
  "
URL:    http://mathworld.wolfram.com/BrentsMethod.html
RETURN: Subsequent root estimate (one inverse quadratic interpolation step).
"
  (LET* ((F.X1 (FUNCALL F X1))
         (F.X2 (FUNCALL F X2))
         (F.X3 (FUNCALL F X3))
         (RR   (/ F.X2 F.X3))
         (SS   (/ F.X2 F.X1))
         (TT   (/ F.X1 F.X3))
         ;; P and Q as on the MathWorld page.
         (P    (* SS (- (* TT (- RR TT) (- X3 X2))
                        (* (- 1 RR) (- X2 X1)))))
         (Q    (* (- TT 1) (- RR 1) (- SS 1))))
    (+ X2 (/ P Q))))



(DEFUN FIXED-POINT (FUN START ENOUGH)
  "
DO:         Compute the fixed-point of the function FUN, starting with
            the value START, the function ENOUGH indicating when we're
            close enough.
RETURN:     The fixed-point.
"
  (DO* ((PREV START CURR)
        (CURR (FUNCALL FUN PREV) (FUNCALL FUN PREV)))
      ((FUNCALL ENOUGH CURR PREV) CURR)))
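
A quick sanity check of FIXED-POINT by itself, with the classic cosine
fixed point (my example, nothing to do with Brent):

(FIXED-POINT (FUNCTION COS) 1.0
             (LAMBDA (CURR PREV) (< (ABS (- CURR PREV)) 1E-6)))
;; => 0.73908...  (the solution of cos(x) = x)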


(DEFUN BRENT (F X1 X2 X3 &OPTIONAL (EPSILON 0.000001))
  (FIXED-POINT (LAMBDA (X) (BRENT-STEP F X1 X X3))
               X2
               (LAMBDA (CURR PREV) (< (ABS (- CURR PREV)) EPSILON))))
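
For example, one step on f(x) = x^2 - 2 over the bracket 1, 1.5, 2 already
moves the middle estimate toward (sqrt 2) (my numbers, just a quick check):

(BRENT-STEP (LAMBDA (X) (- (* X X) 2)) 1.0 1.5 2.0)
;; => 1.4095238    ((sqrt 2) = 1.4142135...)

(BRENT (LAMBDA (X) (- (* X X) 2)) 1.0 1.5 2.0)
;; iterates that step until successive estimates differ by less than
;; EPSILON, so it should end up at (sqrt 2) to about six decimals.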



-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Frederick Crabbe
Subject: Re: Brent's Method?
Date: 
Message-ID: <wzcwugtrsxb.fsf@crab.cs.usna.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> Frederick Crabbe <······@usna.edu> writes:
> 
> > Say, does anyone have a CL implementation of Brent's Method for
> > finding the minimum (or max) of a function of one variable?  I would
> > have thought it could be found in the repositories, but alas, my
> > search failed..
> > 
> > cheers,
> > ric
> 
> Why should there be a library for such a simple function?
> 

Ah, we have some semantic confusion.  Since numerical methods are not
my forte, the fault is most likely mine.

1) By "Brent's Method" I mean, not the root finder, but the minimum
finder.  Not a big difference until you consider 2)...

2) According to Numerical Recipes, "No minimization scheme that
depends solely on (10.2.1 [the parabolic interpolation equation]) is
likely to succeed in practice.  The exacting task is to invent a
scheme that relies on a sure-but-slow technique, like golden section
search, when the function is not cooperative, but that switches over
to (10.2.1) when the function allows. The task is nontrivial for
several reasons, including these: (i) The housekeeping needed to avoid
unnecessary function evaluations in switching between the two methods
can be complicated. (ii) Careful attention must be given to the
endgame, where the function is being evaluated very near to the
roundoff limit of equation (10.1.2). (iii) The scheme for detecting a
cooperative versus noncooperative function must be very robust."

-p.403 in C, p.407 in C++
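
(For reference, equation (10.2.1) is just the abscissa of the vertex of the
parabola through (a,f(a)), (b,f(b)), (c,f(c)):

   x = b - 1/2 * [ (b-a)^2 (f(b)-f(c)) - (b-c)^2 (f(b)-f(a)) ]
               / [ (b-a)   (f(b)-f(c)) - (b-c)   (f(b)-f(a)) ]

quoting from memory, so check the book.)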

They then go on to suggest that Brent's is the thing that solves all
of these tasks and is therefore very complicated.  And of course,
their C code is absolutely impenetrable.

Obviously, using inverse parabolic interpolation by itself would be
easy; it was the paragraph above that scared me off.

Since then, I've found a very nice description in the discussion of
gsl_min_fminimizer_brent in the GNU Scientific Library documentation.

"The outline of the algorithm can be summarized as follows: on each
iteration Brent's method approximates the function using an
interpolating parabola through three existing points. The minimum of
the parabola is taken as a guess for the minimum. If it lies within
the bounds of the current interval then the interpolating point is
accepted, and used to generate a smaller interval. If the
interpolating point is not accepted then the algorithm falls back to
an ordinary golden section step. The full details of Brent's method
include some additional checks to improve convergence."

So, forgive me my uninformed query.  I'll go implement now.

cheers,
ric

P.S.  I still think it would be nice to collect all of these numerical
functions together, even if they ARE simple.  Every time I need one,
it takes me a day to figure out what the function DOES.  I should have
paid more attention in school. *sigh*
From: see.signature
Subject: Re: Brent's Method?
Date: 
Message-ID: <slrnbc6k3o.60.anyone@Flex111.dNWL.WAU.NL>
>> > Say, does anyone have a CL implementation of Brent's Method for
>> > finding the minimum (or max) of a function of one variable?  I would
>> > have thought it could be found in the repositories, but alas, my
>> > search failed..
>> > 
>> > cheers,
>> > ric
>> 
Have a look at www.xlispstat.org; there is a translation to Lisp of
code from Numerical Recipes there, and a Brent root finder.

Marc

-- 
------------------------------------------------------------------------------
email: marc dot hoffmann at users dot whh dot wau dot nl
------------------------------------------------------------------------------
From: David Combs
Subject: Re: Brent's Method?
Date: 
Message-ID: <bbk3t5$9a3$1@reader1.panix.com>
In article <····················@Flex111.dNWL.WAU.NL>,
see.signature <see.signature> wrote:
>>> > Say, does anyone have a CL implementation of Brent's Method for
>>> > finding the minimum (or max) of a function of one variable?  I would
>>> > have thought it could be found in the repositories, but alas, my
>>> > search failed..

I think that if you look at some "good" multi-dimensional
minimization software, you'll find that within it they
use either Brent's zero-finder or improvements on it.

I recall from long ago, probably imperfectly, that the typical name
for such an inner, single-dim zero-finder is a "line-search" routine,
algorithm, whatever.

----

Brent's algorithm -- that was long, long ago.  I used to get a
now-long-gone computer magazine/journal from the UK named, I think,
"The Computer Journal".

His routine showed up in maybe 1971.

---

Where is he now, Canberra, Australian National University?

Long time ago.

David
From: Prof Ric Crabbe
Subject: Re: Brent's Method?
Date: 
Message-ID: <wzcd6hqooml.fsf@crab.cs.usna.edu>
·······@panix.com (David Combs) writes:
> In an attribution long lost, Ric Crabbe wrote:
> >>> > Say, does anyone have a CL implementation of Brent's Method for
> >>> > finding the minimum (or max) of a function of one variable?  I would
> >>> > have thought it could be found in the repositories, but alas, my
> >>> > search failed..
> 
> I think that if you look at some "good" multi-dimensional
> minimization software, you'll find that within it they
> use either Brent's zero-finder or improvements on it.
> 

I was going to let this drop, but since it was brought up again, here goes.

As I mentioned in an earlier post, there are two beasts: Brent's minimum
finder and Brent's zero finder.  If you can find the derivative, you can
use the zero finder to find a min, but sometimes you can't easily get the
derivative, and Brent's min finder has no such constraint.

The idea of Brent's min finder is pretty straightforward, but it has
some problems in that it is not actually guaranteed to improve its
bracketing at each iteration.  The basic technique is to check for
this and back off to something else, like a golden section search.

Anyway, in case anyone else ever wants this, here is my version.  I am
of course happy to discuss any opportunities for improvement people
see. 

cheers,
ric

P.S.  I actually re-tuned this to find a max, not a min.  Brent's step
doesn't actually care; it will find either a max or a min depending on
the arguments, but the golden section search is particular.


;;; Finds a max using Brent's method
(defun brent-step (a fa b fb c fc)
  "
given:  a, b & c, points that bracket an extremum, plus their function
        values fa, fb & fc.
do:     fit a parabola through the three points and take x, the vertex of
        the parabola, as the new middle point.  If something goes wrong,
        throws a brent-failure.
        http://www.library.cornell.edu/nr/bookcpdf/c10-2.pdf
        Because it works off the parabola, if f(b) > f(a) & f(c) it finds
        a max; if f(b) < f(a) & f(c) it finds a min.
return: a b c, the new bracket.
"
  (let* ((BA (- b a))
	 (fBC (- fb fc))
	 (BC (- b c))
	 (fBA (- fb fa))
	 (den  (- (* 2 BA fBC) (* 2 BC fBA))))
    (when (zerop den) (throw 'brent-failure nil))
    (let ((x (- b (/ (- (* BA BA fBC) (* BC BC fBA)) den))))
      ;; reject the step if the parabola vertex came out NaN ((/= x x) is
      ;; true only for a NaN) or fell outside the current bracket.
      (when (or (/= x x) (< x a) (> x c))
	(throw 'brent-failure nil))
      (if (< x b) (values a x b)
	(values b x c)))))


(defun golden-section-step (f a b c fb)
  "
do:      Perform a step of the golden section search.
        This one has to be different for max or min.
return:  x,f(x).
"
  ;;pick x 0.38197 % of the way into the larger section.
  (let* ((ba (- b a))
	 (cb (- c b)))
    (if (> ba cb)			;pick left
	(let ((x (- b (* 0.38196601 ba))))
	  (if (< (funcall f x) fb)	;low
	      (values x b c)
	    (values a x b )))		;high
      (let ((x (+ b (* 0.38196601 cb))));pick right
	(if (< (funcall f x) fb)	;low
	    (values a b x)
	  (values b x c))))))

(defun brent (f a b c &optional (epsilon 0.00001))
  "
given:  a function and 3 x values a,b,c.  these values must 'bracket' a max:
        a<b<c & f(a) < f(b) & f(c) < f(b)
do:     find a max between a and c
return: the location of the max
"
  (do* ((olda 0)(oldb 0)(oldc 0)
	(fa (funcall f a) (funcall f a)) ; f is re-evaluated at a, b and c every iteration; memoizing f would save calls
	(fb (funcall f b) (funcall f b))
	(fc (funcall f c) (funcall f c)))
      ((< (+ (abs (- olda a)) (abs (- oldb b)) (abs (- oldc c))) epsilon) b)
    (setq olda a oldb b oldc c)
    (unless (catch 'brent-failure	;let brent-step decide if it screwed up
	      (multiple-value-setq (a b c) (brent-step a fa b fb c fc)))
      (multiple-value-setq (a b c) (golden-section-step f a b c fb)))))
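
For example (a quick check of mine, nothing rigorous): the bracket 0, 1, 3
is valid for sin, since (sin 1.0) is larger than both (sin 0.0) and (sin 3.0),
and the search should settle somewhere around pi/2:

(brent #'sin 0.0 1.0 3.0)
;; => approximately 1.5708, give or take the default epsilon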

;; No one will ever read this comment.
From: David Combs
Subject: Re: Brent's Method?
Date: 
Message-ID: <bch8t6$crs$1@reader1.panix.com>
In article <···············@crab.cs.usna.edu>,
Prof Ric Crabbe  <······@usna.edu> wrote:
>
>·······@panix.com (David Combs) writes:
>> In an attribution long lost, Ric Crabbe wrote:
>> >>> > Say, does anyone have a CL implementation of Brent's Method for
>> >>> > finding the minimum (or max) of a function of one variable?  I would
>> >>> > have thought it could be found in the repositories, but alas, my
>> >>> > search failed..
>> 
>> I think that if you look at some "good" multi-dimensional
>> minimization software, you'll find that within it they
>> use either Brent's zero-finder or improvements on it.
>> 
>
>I was going to let this drop, but since it was brought up again.
>


Thanks for the info!

David
From: Joe Marshall
Subject: Re: Brent's Method?
Date: 
Message-ID: <ptmk2sjt.fsf@ccs.neu.edu>
Frederick Crabbe <······@usna.edu> writes:

> Say, does anyone have a CL implementation of Brent's Method for
> finding the minimum (or max) of a function of one variable?  

There are a lot of good numeric things in `scmutils' that Sussman
wrote for his `Structure and Interpretation of Classical Mechanics'. 
See http://www-swiss.ai.mit.edu/~gjs/6946/linux-install.htm

Here's his implementation of Brent's Method in Scheme.  Converting
this to CL should be trivial.


(define (brent-min f a b eps)
  (let* ((maxcount 100)
	 (small-bugger-factor *sqrt-machine-epsilon*)
	 (g (/ (- 3 (sqrt 5)) 2))
         (a a)
         (b b)
         (x (+ a (* g (- b a))))
         (fx (f x))
         (w x) (fw fx) (v x) (fv fx)
         (d 0) (e 0) (old-e 0) (p 0) (q 0)
         (u 0) (fu 0))
    (let loop ((count 0))
      (if (> count maxcount)
	  (list 'maxcount x fx count) ;failed to converge
	  (let* ((tol (+ (* eps (abs x)) small-bugger-factor))
		 (2tol (* 2 tol))
		 (m (/ (+ a b) 2)))
	    ;; test for convergence
	    (if (< (max (- x a) (- b x)) 2tol)
		(list x fx count)
		(begin
		  (if (> (abs e) tol)
		      (let* ((t1 (* (- x w) (- fx fv)))
			     (t2 (* (- x v) (- fx fw)))
			     (t3 (- (* (- x v) t2) (* (- x w) t1)))
			     (t4 (* 2 (- t2 t1))))
			(set! p (if (positive? t4) (- t3) t3))
			(set! q (abs t4))
			(set! old-e e)
			(set! e d)))
		  (if (and (< (abs p) (abs (* 0.5 q old-e)))
			   (> p (* q (- a x)))
			   (< p (* q (- b x))))
		      ;; parabolic step
		      (begin (set! d (/ p q))
			     (set! u (+ x d))
			     (if (< (min (- u a) (- b u)) 2tol)
				 (set! d (if (< x m) tol (- tol)))))
		      ;;else, golden section step
		      (begin (set! e (if (< x m) (- b x) (- a x)))
			     (set! d (* g e))))
		  (set! u (+ x (if (> (abs d) tol) 
				   d
				   (if (positive? d) tol (- tol)))))
		  (set! fu (f u))
		  (if (<= fu fx)
		      (begin (if (< u x) (set! b x) (set! a x))
			     (set! v w) (set! fv fw)
			     (set! w x) (set! fw fx)
			     (set! x u) (set! fx fu))
		      (begin (if (< u x) (set! a u) (set! b u))
			     (if (or (<= fu fw) (= w x))
				 (begin (set! v w) (set! fv fw)
					(set! w u) (set! fw fu))
				 (if (or (<= fu fv) (= v x) (= v w))
				     (begin (set! v u) (set! fv fu))))))
		  (loop (+ count 1)))))))))
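
For what it's worth, the calling convention, from my reading of the code
(untested; note that *sqrt-machine-epsilon* comes from scmutils itself):

(brent-min (lambda (x) (* (- x 3) (- x 3))) 0 10 1e-10)
;; => a three-element list (x fx iteration-count), with x essentially 3
;; for this parabola; on failure to converge the list starts with 'maxcount.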