scipy.optimize.fmin_powell

scipy.optimize.fmin_powell(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None)

Minimize a function using modified Powell’s method. This method only uses function values, not derivatives.
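For a quick, illustrative call (a sketch added here, not part of the reference itself), the Rosenbrock test function shipped as scipy.optimize.rosen can be minimized as follows; with the default full_output=0 only xopt is returned.

    import numpy as np
    from scipy.optimize import fmin_powell, rosen

    x0 = np.array([1.3, 0.7, 0.8])           # initial guess
    xopt = fmin_powell(rosen, x0, disp=0)    # disp=0 suppresses the convergence message
    # xopt should land close to the known minimizer [1, 1, 1]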

Parameters:

func : callable f(x,*args)

Objective function to be minimized.

x0 : ndarray

Initial guess.

args : tuple, optional

Extra arguments passed to func.

callback : callable, optional

An optional user-supplied function, called after each iteration as callback(xk), where xk is the current parameter vector (see the example after this parameter list).

direc : ndarray, optional

Initial direction set.

xtol : float, optional

Line-search error tolerance.

ftol : float, optional

Relative error in func(xopt) acceptable for convergence.

maxiter : int, optional

Maximum number of iterations to perform.

maxfun : int, optional

Maximum number of function evaluations to make.

full_output : bool, optional

If True, fopt, direc, iter, funcalls, and warnflag are returned in addition to xopt (see the example after the Returns list).

disp : bool, optional

If True, print convergence messages.

retall : bool, optional

If True, return a list of the solution at each iteration.
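As an illustrative sketch (an assumed example, not from the original page), callback and direc can be combined as shown here; quadratic and record_point are names invented for the example.

    import numpy as np
    from scipy.optimize import fmin_powell

    def quadratic(x):                      # example objective, invented for this sketch
        return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

    history = []                           # filled by the callback after each iteration

    def record_point(xk):                  # hypothetical helper used as the callback
        history.append(np.copy(xk))

    x0 = np.zeros(2)
    direc0 = np.eye(2)                     # initial direction set: the coordinate axes
    xopt = fmin_powell(quadratic, x0, callback=record_point, direc=direc0, disp=0)
    # len(history) equals the number of iterations performed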

Returns:

xopt : ndarray

Parameter which minimizes func.

fopt : number

Value of function at minimum: fopt = func(xopt).

direc : ndarray

Current direction set.

iter : int

Number of iterations.

funcalls : int

Number of function calls made.

warnflag : int

Integer warning flag:

1 : Maximum number of function evaluations.
2 : Maximum number of iterations.

allvecs : list

List of solutions at each iteration.
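A sketch of unpacking the full return tuple (an added example, assuming that with full_output=True and retall=True the values come back in the order listed above):

    import numpy as np
    from scipy.optimize import fmin_powell, rosen

    x0 = np.array([1.3, 0.7, 0.8])
    xopt, fopt, direc, niter, funcalls, warnflag, allvecs = fmin_powell(
        rosen, x0, full_output=True, retall=True, disp=0)
    # warnflag == 0 indicates normal termination; allvecs[-1] should match xopt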

See also

minimize
Interface to unconstrained minimization algorithms for multivariate functions. See the ‘Powell’ method in particular.
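The same minimization through the minimize interface mentioned above might look like the following sketch (option names assumed to follow the 'Powell' method's documented options):

    import numpy as np
    from scipy.optimize import minimize, rosen

    res = minimize(rosen, np.array([1.3, 0.7, 0.8]), method='Powell',
                   options={'xtol': 1e-4, 'ftol': 1e-4, 'disp': False})
    # res.x is the minimizer and res.fun the corresponding function value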

Notes

Uses a modification of Powell’s method to find the minimum of a function of N variables. Powell’s method is a conjugate direction method.

The algorithm has two loops: the outer loop simply repeats the inner loop, and the inner loop minimizes over each current direction in the direction set. At the end of the inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced with the difference between the current estimated x and the estimated x from the beginning of the inner loop.

The technical conditions for replacing the direction of largest decrease amount to checking that

  1. No further gain can be made along the direction of largest decrease from that iteration.
  2. The direction of largest decrease accounted for a sufficiently large fraction of the decrease in the function value over that iteration of the inner loop.
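To make the two-loop structure concrete, here is a heavily simplified pure-NumPy sketch of the direction-set idea, written for this note. It is not the SciPy implementation: it always performs the direction replacement instead of applying the tests listed above, and it relies on scipy.optimize.minimize_scalar for the line searches.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def powell_sketch(func, x0, maxiter=100, ftol=1e-4):
        # Simplified illustration of the direction-set loop; not the SciPy code.
        x = np.asarray(x0, dtype=float)
        n = x.size
        directions = np.eye(n)                  # start with the coordinate axes
        fval = func(x)
        for _ in range(maxiter):
            x_start, f_start = x.copy(), fval
            biggest_drop, drop_index = 0.0, 0
            # Inner loop: one scalar line minimization along each direction.
            for i in range(n):
                d = directions[i]
                line = lambda t, d=d, x=x: func(x + t * d)
                t_best = minimize_scalar(line).x
                f_new = line(t_best)
                if fval - f_new > biggest_drop:
                    biggest_drop, drop_index = fval - f_new, i
                x = x + t_best * d
                fval = f_new
            # Replace the direction of largest decrease with the net displacement
            # of this sweep (SciPy only does this when the tests above hold).
            new_dir = x - x_start
            norm = np.linalg.norm(new_dir)
            if norm > 0:
                directions[drop_index] = new_dir / norm
            # Stop when the relative decrease over the sweep is small.
            if 2.0 * (f_start - fval) <= ftol * (abs(f_start) + abs(fval)) + 1e-20:
                break
        return x, fval

For a convex quadratic such as lambda x: np.sum((x - 3.0) ** 2) with x0 = np.zeros(4), this sketch should converge to a vector of threes within a couple of sweeps.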

References

Powell, M.J.D. (1964). "An efficient method for finding the minimum of a function of several variables without calculating derivatives." Computer Journal, 7(2): 155-162.

Press, W., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P. Numerical Recipes (any edition), Cambridge University Press.