OpenCV  4.1.0
Open Source Computer Vision
cv::ConjGradSolver Class Reference

This class is used to perform the non-linear non-constrained minimization of a function with known gradient. More...

#include <opencv2/core/optim.hpp>

Inheritance diagram for cv::ConjGradSolver:
cv::Algorithm → cv::MinProblemSolver → cv::ConjGradSolver

Static Public Member Functions

static Ptr< ConjGradSolver > create (const Ptr< MinProblemSolver::Function > &f=Ptr< ConjGradSolver::Function >(), TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5000, 0.000001))
 This function returns a reference to a ready-to-use ConjGradSolver object.
 

Additional Inherited Members

- Public Member Functions inherited from cv::MinProblemSolver
virtual Ptr< Function > getFunction () const =0
 Getter for the optimized function.
 
virtual TermCriteria getTermCriteria () const =0
 Getter for the previously set termination criteria for this algorithm.
 
virtual double minimize (InputOutputArray x)=0
 Actually runs the algorithm and performs the minimization.
 
virtual void setFunction (const Ptr< Function > &f)=0
 Setter for the optimized function.
 
virtual void setTermCriteria (const TermCriteria &termcrit)=0
 Setter for the termination criteria for the solver.
 
- Protected Member Functions inherited from cv::Algorithm
void writeFormat (FileStorage &fs) const
 

Detailed Description

This class is used to perform the non-linear non-constrained minimization of a function with known gradient, defined on an n-dimensional Euclidean space, using the Nonlinear Conjugate Gradient method. The implementation is based on the beautifully clear explanatory article [An Introduction to the Conjugate Gradient Method Without the Agonizing Pain](http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf) by Jonathan Richard Shewchuk. The method can be seen as an adaptation of the standard Conjugate Gradient method (see, for example, http://en.wikipedia.org/wiki/Conjugate_gradient_method) for numerically solving systems of linear equations.

It should be noted that this method, although deterministic, is rather a heuristic and therefore may converge to a local minimum, not necessarily a global one. What is even more problematic, most of its behaviour is governed by the gradient, so it essentially cannot distinguish between local minima and maxima: if it starts sufficiently near a local maximum, it may converge to it. Another obvious restriction is that it must be possible to compute the gradient of the function at any point, so it is preferable to have an analytic expression for the gradient, with the computational burden borne by the user.

The latter responsibility is accomplished via the getGradient method of the MinProblemSolver::Function interface (which represents the function being optimized). This method takes a point in n-dimensional space (the first argument is the array of coordinates of that point) and computes its gradient (which should be stored in the second argument as an array).
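As a concrete illustration, here is a minimal sketch of that interface. The SphereFunction class and its quadratic objective are purely illustrative choices (not part of OpenCV); the MinProblemSolver::Function methods, ConjGradSolver::create and minimize are the documented API.

#include <opencv2/core.hpp>
#include <opencv2/core/optim.hpp>
#include <iostream>

// Illustrative objective: f(x, y) = (x - 1)^2 + (y + 2)^2, minimum at (1, -2).
class SphereFunction : public cv::MinProblemSolver::Function
{
public:
    int getDims() const CV_OVERRIDE { return 2; }
    double calc(const double* x) const CV_OVERRIDE
    {
        return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0);
    }
    // Analytic gradient, written into the second argument as an array.
    void getGradient(const double* x, double* grad) CV_OVERRIDE
    {
        grad[0] = 2.0 * (x[0] - 1.0);
        grad[1] = 2.0 * (x[1] + 2.0);
    }
};

int main()
{
    cv::Ptr<cv::ConjGradSolver> solver = cv::ConjGradSolver::create(cv::makePtr<SphereFunction>());
    cv::Mat x = (cv::Mat_<double>(1, 2) << 0.0, 0.0); // starting point; overwritten with the argmin
    double fmin = solver->minimize(x);
    std::cout << "f_min = " << fmin << " at x = " << x << std::endl;
    return 0;
}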

Note
class ConjGradSolver thus does not add any new methods to the basic MinProblemSolver interface.
The termination criteria should meet the following condition:
termcrit.type == (TermCriteria::MAX_ITER + TermCriteria::EPS) && termcrit.epsilon > 0 && termcrit.maxCount > 0
// or
termcrit.type == TermCriteria::MAX_ITER && termcrit.maxCount > 0
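For example, a termination criterion matching the first form above could be set explicitly as follows (solver here is assumed to be an already created Ptr<ConjGradSolver>):

// Stop after at most 5000 iterations or once the improvement falls below 1e-6.
cv::TermCriteria termcrit(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 5000, 1e-6);
solver->setTermCriteria(termcrit);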

Member Function Documentation

static Ptr<ConjGradSolver> cv::ConjGradSolver::create ( const Ptr< MinProblemSolver::Function > & f = Ptr< ConjGradSolver::Function >(),
                                                        TermCriteria termcrit = TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 5000, 0.000001) )
static

This function returns a reference to a ready-to-use ConjGradSolver object.

All the parameters are optional, so this procedure can be called even without any parameters at all; in that case, the default values are used. Since the default values for the termination criteria are the only sensible ones, MinProblemSolver::setFunction() should be called on the obtained object if the function was not passed to create(). Otherwise, the two ways (submitting it to create(), or omitting it and calling MinProblemSolver::setFunction()) are absolutely equivalent (and will raise the same errors in the same way, should invalid input be detected).

Parameters
f	Pointer to the function that will be minimized, similarly to the one you submit via MinProblemSolver::setFunction.
termcrit	Termination criteria for the algorithm, similarly to the one you submit via MinProblemSolver::setTermCriteria.
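To make the equivalence described above concrete, the following sketch shows both initialization orders; MyFunction stands for a hypothetical user-defined implementation of MinProblemSolver::Function.

// Supply the objective at creation time...
cv::Ptr<cv::ConjGradSolver> s1 = cv::ConjGradSolver::create(cv::makePtr<MyFunction>());

// ...or create with defaults and set it afterwards; both behave identically.
cv::Ptr<cv::ConjGradSolver> s2 = cv::ConjGradSolver::create();
s2->setFunction(cv::makePtr<MyFunction>());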

The documentation for this class was generated from the following file: opencv2/core/optim.hpp