scipy.optimize.fmin_bfgs

scipy.optimize.fmin_bfgs(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None)

Minimize a function using the BFGS algorithm.

Parameters:

f : callable f(x,*args)

Objective function to be minimized.

x0 : ndarray

Initial guess.

fprime : callable f'(x,*args), optional

Gradient of f.

args : tuple, optional

Extra arguments passed to f and fprime.

gtol : float, optional

Gradient norm must be less than gtol before successful termination.

norm : float, optional

Order of norm (Inf is max, -Inf is min).

epsilon : float or ndarray, optional

If fprime is approximated, use this value for the step size.

callback : callable, optional

An optional user-supplied function to call after each iteration. Called as callback(xk), where xk is the current parameter vector (see the sketch after this parameter list).

maxiter : int, optional

Maximum number of iterations to perform.

full_output : bool, optional

If True, return fopt, gopt, Bopt, func_calls, grad_calls, and warnflag in addition to xopt.

disp : bool, optional

Print convergence message if True.

retall : bool, optional

Return a list of the solution at each iteration if True.
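
For illustration, a minimal sketch (not part of the original page) showing how callback can record the iterates; rosen and rosen_der are the Rosenbrock test function and its gradient from scipy.optimize:

>>> import numpy as np
>>> from scipy.optimize import fmin_bfgs, rosen, rosen_der
>>> iterates = []
>>> xopt = fmin_bfgs(rosen, np.array([1.3, 0.7]), fprime=rosen_der,
...                  callback=iterates.append, disp=0)
>>> len(iterates) > 0  # one parameter vector appended per iteration
True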

Returns:

xopt : ndarray

Parameters which minimize f, i.e. f(xopt) == fopt.

fopt : float

Minimum value.

gopt : ndarray

Value of gradient at minimum, f'(xopt), which should be near 0.

Bopt : ndarray

Value of 1/f''(xopt), i.e., the inverse Hessian matrix.

func_calls : int

Number of function calls made.

grad_calls : int

Number of gradient calls made.

warnflag : int

1 : Maximum number of iterations exceeded.
2 : Gradient and/or function calls not changing.

allvecs : list

The value of xopt at each iteration. Only returned if retall is True.

See also

minimize
Interface to minimization algorithms for multivariate functions. See the 'BFGS' method in particular.
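
As a sketch of the equivalent call through the newer interface (the Rosenbrock helpers rosen and rosen_der from scipy.optimize are used for illustration):

>>> from scipy.optimize import minimize, rosen, rosen_der
>>> res = minimize(rosen, [1.3, 0.7], method='BFGS', jac=rosen_der,
...                options={'gtol': 1e-5})
>>> res.success
True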

Notes

Optimize the function f, whose gradient is given by fprime, using the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS).

References

Wright, S., and Nocedal, J., 'Numerical Optimization', 1999, p. 198.
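
Examples

The following usage sketch is illustrative (the quadratic objective is invented for this example); with full_output=True the return values unpack in the order given in the Returns list above:

>>> import numpy as np
>>> from scipy.optimize import fmin_bfgs
>>> def f(x):  # hypothetical quadratic, minimized at [1.0, 2.5]
...     return np.sum((x - np.array([1.0, 2.5])) ** 2)
>>> def fprime(x):  # its analytic gradient
...     return 2.0 * (x - np.array([1.0, 2.5]))
>>> xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflag = fmin_bfgs(
...     f, np.zeros(2), fprime=fprime, full_output=True, disp=0)
>>> np.allclose(xopt, [1.0, 2.5])
True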