Class ExternalOptimizerInterface
Defined in tensorflow/contrib/opt/python/training/external_optimizer.py.
Base class for interfaces with external optimization algorithms.
Subclass this and implement _minimize in order to wrap a new optimization
algorithm.
ExternalOptimizerInterface should not be instantiated directly; instead use
e.g. ScipyOptimizerInterface.
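The subclassing contract can be sketched in plain Python with NumPy. This is a hypothetical stand-in, not the actual TensorFlow implementation: the class and helper names other than _minimize are invented for illustration, and a toy gradient-descent loop plays the role of the wrapped external algorithm.

```python
import numpy as np

class ExternalOptimizerSketch:
    """Hypothetical stand-in for ExternalOptimizerInterface: minimize()
    drives the external algorithm supplied by a subclass's _minimize."""

    def __init__(self, loss_fn, grad_fn, x0):
        self._loss_fn = loss_fn   # callable returning the scalar loss
        self._grad_fn = grad_fn   # callable returning the gradient vector
        self._x = np.asarray(x0, dtype=float)

    def minimize(self):
        # Run the external algorithm, then write the result back,
        # mirroring how the real interface updates Variables in-place
        # at the end of optimization.
        self._x = self._minimize(self._x)
        return self._x

    def _minimize(self, x0):
        raise NotImplementedError("subclasses wrap a concrete algorithm here")

class GradientDescentSketch(ExternalOptimizerSketch):
    """Toy 'external algorithm': plain gradient descent."""

    def _minimize(self, x0, steps=200, lr=0.1):
        x = x0.copy()
        for _ in range(steps):
            x -= lr * self._grad_fn(x)
        return x

# Minimize f(x) = ||x - 1||^2, whose minimum is x = [1, 1].
opt = GradientDescentSketch(
    loss_fn=lambda x: np.sum((x - 1.0) ** 2),
    grad_fn=lambda x: 2.0 * (x - 1.0),
    x0=[0.0, 0.0],
)
result = opt.minimize()
```

The real interface builds the loss and gradient callables from the TensorFlow graph; only the division of labor (generic driver in the base class, algorithm in _minimize) is what this sketch illustrates.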
__init__
__init__(
loss,
var_list=None,
equalities=None,
inequalities=None,
var_to_bounds=None,
**optimizer_kwargs
)
Initialize a new interface instance.
Args:
loss: A scalar Tensor to be minimized.
var_list: Optional list of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
equalities: Optional list of equality constraint scalar Tensors to be held equal to zero.
inequalities: Optional list of inequality constraint scalar Tensors to be held nonnegative.
var_to_bounds: Optional dict where each key is an optimization Variable and each corresponding value is a length-2 tuple of (low, high) bounds. Although enforcing this kind of simple constraint could be accomplished with the inequalities arg, not all optimization algorithms support general inequality constraints, e.g. L-BFGS-B. Both low and high can either be numbers or anything convertible to a NumPy array that can be broadcast to the shape of var (using np.broadcast_to). To indicate that there is no bound, use None (or +/- np.infty). For example, if var is a 2x3 matrix, then any of the following corresponding bounds could be supplied:
  (0, np.infty): Each element of var held positive.
  (-np.infty, [1, 2, 3]): First column less than 1, second column less than 2, third column less than 3.
  (-np.infty, [[1], [2]]): First row less than 1, second row less than 2.
  (-np.infty, [[1, 2, 3], [4, 5, 6]]): Entry var[0, 0] less than 1, var[0, 1] less than 2, etc.
**optimizer_kwargs: Other subclass-specific keyword arguments.
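The bound forms listed above can be checked directly with np.broadcast_to, the same mechanism the docstring names. A quick NumPy sketch for a 2x3 var:

```python
import numpy as np

var_shape = (2, 3)  # a 2x3 matrix, as in the var_to_bounds examples

# Each upper-bound form broadcasts to var's shape:
elementwise = np.broadcast_to(np.inf, var_shape)              # scalar bound
per_column = np.broadcast_to([1, 2, 3], var_shape)            # one bound per column
per_row = np.broadcast_to([[1], [2]], var_shape)              # one bound per row
full = np.broadcast_to([[1, 2, 3], [4, 5, 6]], var_shape)     # one bound per entry
```

Any (low, high) pair whose members broadcast this way is acceptable; shapes that fail np.broadcast_to (e.g. a length-2 vector against a 2x3 matrix) are not valid bounds.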
Methods
tf.contrib.opt.ExternalOptimizerInterface.minimize
minimize(
session=None,
feed_dict=None,
fetches=None,
step_callback=None,
loss_callback=None,
**run_kwargs
)
Minimize a scalar Tensor.
Variables subject to optimization are updated in-place at the end of optimization.
Note that this method does not just return a minimization Op, unlike
Optimizer.minimize(); instead it actually performs minimization by
executing commands to control a Session.
Args:
session: A Session instance.
feed_dict: A feed dict to be passed to calls to session.run.
fetches: A list of Tensors to fetch and supply to loss_callback as positional arguments.
step_callback: A function to be called at each optimization step; arguments are the current values of all optimization variables flattened into a single vector.
loss_callback: A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments.
**run_kwargs: kwargs to pass to session.run.
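The callback protocol can be illustrated without TensorFlow. In this hypothetical sketch (function and parameter names are invented; a toy gradient-descent loop stands in for the external algorithm), loss_callback fires each time the loss and gradient are evaluated and step_callback fires once per step with the flattened variable values:

```python
import numpy as np

def minimize_sketch(loss_grad_fn, x0, steps=100, lr=0.1,
                    step_callback=None, loss_callback=None):
    """Hypothetical driver loop mirroring minimize()'s callback semantics."""
    x = np.asarray(x0, dtype=float).ravel()
    for _ in range(steps):
        loss, grad = loss_grad_fn(x)
        if loss_callback is not None:
            loss_callback(loss)      # evaluated fetches would be passed here too
        x = x - lr * grad
        if step_callback is not None:
            step_callback(x.copy())  # current variable values, flattened
    return x

losses = []
x_final = minimize_sketch(
    lambda x: (np.sum(x ** 2), 2.0 * x),  # f(x) = ||x||^2 and its gradient
    x0=[3.0, -4.0],
    loss_callback=losses.append,
)
```

As with the real interface, the result is only written back at the end; the callbacks are the way to observe intermediate state during optimization.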