Class AdditiveSwapRegretOptimizer

Defined in tensorflow/contrib/constrained_optimization/python/swap_regret_optimizer.py.

A ConstrainedOptimizer based on swap-regret minimization.
This ConstrainedOptimizer uses the given tf.train.Optimizers to jointly minimize over the model parameters, and maximize over the constraint/objective weight matrix (the analogue of Lagrange multipliers), with the latter maximization using additive updates and an algorithm that minimizes swap regret.
For more specifics, please refer to:

Cotter, Jiang and Sridharan. "Two-Player Games for Efficient Non-Convex Constrained Optimization". https://arxiv.org/abs/1804.06500

The formulation used by this optimizer can be found in Definition 2, and is discussed in Section 4. It is most similar to Algorithm 2 in Section 4, with the differences being that it uses tf.train.Optimizers, instead of SGD, for the "inner" updates, and performs additive (instead of multiplicative) updates of the stochastic matrix.
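The mechanics of the "outer" player can be sketched in NumPy. This is a conceptual illustration only, not the library's code: the helper names, the row-stochastic convention, and the Euclidean projection step are assumptions. The idea, following the paper, is that the weight matrix receives an additive gradient step, is projected back onto stochastic matrices, and its stationary distribution supplies the objective/constraint weights.

```python
import numpy as np

def project_rows_onto_simplex(m):
    """Euclidean projection of each row of `m` onto the probability
    simplex, so every row is a valid distribution."""
    projected = np.empty_like(m)
    for i, row in enumerate(m):
        u = np.sort(row)[::-1]  # sort descending
        css = np.cumsum(u)
        # Largest 0-based index rho with u_rho > (css_rho - 1) / (rho + 1).
        rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - 1.0))[0][-1]
        theta = (css[rho] - 1.0) / (rho + 1.0)
        projected[i] = np.maximum(row - theta, 0.0)
    return projected

def additive_update(matrix, gradient, learning_rate=0.1):
    """One additive step on the weight matrix, followed by projection
    back onto row-stochastic matrices (the additive analogue of the
    multiplicative update in Algorithm 2 of the paper)."""
    return project_rows_onto_simplex(matrix + learning_rate * gradient)

def stationary_distribution(p, iterations=200):
    """Approximate the stationary distribution pi of a row-stochastic
    matrix (pi @ p == pi) by power iteration."""
    pi = np.full(p.shape[0], 1.0 / p.shape[0])
    for _ in range(iterations):
        pi = pi @ p
        pi = pi / pi.sum()
    return pi
```

After each update the rows remain distributions, and the stationary distribution of the resulting matrix is what plays the role of the Lagrange-multiplier analogues.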
__init__

__init__(
    optimizer,
    constraint_optimizer=None
)

Constructs a new AdditiveSwapRegretOptimizer.
Args:

optimizer: tf.train.Optimizer, used to optimize the objective and proxy_constraints portion of the ConstrainedMinimizationProblem. If constraint_optimizer is not provided, this will also be used to optimize the Lagrange multiplier analogues.
constraint_optimizer: optional tf.train.Optimizer, used to optimize the Lagrange multiplier analogues.

Returns:

A new AdditiveSwapRegretOptimizer.
Properties

constraint_optimizer

Returns the tf.train.Optimizer used for the matrix.

optimizer

Returns the tf.train.Optimizer used for optimization.
Methods
tf.contrib.constrained_optimization.AdditiveSwapRegretOptimizer.minimize
minimize(
minimization_problem,
unconstrained_steps=None,
global_step=None,
var_list=None,
gate_gradients=train_optimizer.Optimizer.GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
name=None,
grad_loss=None
)
Returns an Operation for minimizing the constrained problem.

This method combines the functionality of minimize_unconstrained and minimize_constrained. If global_step < unconstrained_steps, it will perform an unconstrained update, and if global_step >= unconstrained_steps, it will perform a constrained update.

The reason for this functionality is that it may be best to initialize the constrained optimizer with an approximate optimum of the unconstrained problem.
Args:

minimization_problem: ConstrainedMinimizationProblem, the problem to optimize.
unconstrained_steps: int, number of steps for which we should perform unconstrained updates, before transitioning to constrained updates.
global_step: as in tf.train.Optimizer's minimize method.
var_list: as in tf.train.Optimizer's minimize method.
gate_gradients: as in tf.train.Optimizer's minimize method.
aggregation_method: as in tf.train.Optimizer's minimize method.
colocate_gradients_with_ops: as in tf.train.Optimizer's minimize method.
name: as in tf.train.Optimizer's minimize method.
grad_loss: as in tf.train.Optimizer's minimize method.
Returns:

An Operation, the train_op.

Raises:

ValueError: If unconstrained_steps is provided, but global_step is not.
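The warm-start behavior described above can be sketched in plain Python. The function name and return values here are illustrative, not the library's internals; only the branching rule comes from the documentation.

```python
def choose_update(global_step, unconstrained_steps=None):
    """Mirror the documented branching of `minimize`: perform
    unconstrained updates first, then constrained updates once
    `global_step` reaches `unconstrained_steps`."""
    if unconstrained_steps is None:
        # No warm-start requested: always perform the constrained update.
        return "constrained"
    if global_step is None:
        # The documented error case: unconstrained_steps requires global_step.
        raise ValueError(
            "global_step is required when unconstrained_steps is provided")
    if global_step < unconstrained_steps:
        return "unconstrained"
    return "constrained"
```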
tf.contrib.constrained_optimization.AdditiveSwapRegretOptimizer.minimize_constrained
minimize_constrained(
minimization_problem,
global_step=None,
var_list=None,
gate_gradients=train_optimizer.Optimizer.GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
name=None,
grad_loss=None
)
Returns an Operation for minimizing the constrained problem.

Unlike minimize_unconstrained, this function attempts to find a solution that minimizes the objective portion of the minimization problem while satisfying the constraints portion.
Args:

minimization_problem: ConstrainedMinimizationProblem, the problem to optimize.
global_step: as in tf.train.Optimizer's minimize method.
var_list: as in tf.train.Optimizer's minimize method.
gate_gradients: as in tf.train.Optimizer's minimize method.
aggregation_method: as in tf.train.Optimizer's minimize method.
colocate_gradients_with_ops: as in tf.train.Optimizer's minimize method.
name: as in tf.train.Optimizer's minimize method.
grad_loss: as in tf.train.Optimizer's minimize method.
Returns:

An Operation, the train_op.
tf.contrib.constrained_optimization.AdditiveSwapRegretOptimizer.minimize_unconstrained
minimize_unconstrained(
minimization_problem,
global_step=None,
var_list=None,
gate_gradients=train_optimizer.Optimizer.GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
name=None,
grad_loss=None
)
Returns an Operation for minimizing the unconstrained problem.

Unlike minimize_constrained, this function ignores the constraints (and proxy_constraints) portion of the minimization problem entirely, and only minimizes objective.
Args:

minimization_problem: ConstrainedMinimizationProblem, the problem to optimize.
global_step: as in tf.train.Optimizer's minimize method.
var_list: as in tf.train.Optimizer's minimize method.
gate_gradients: as in tf.train.Optimizer's minimize method.
aggregation_method: as in tf.train.Optimizer's minimize method.
colocate_gradients_with_ops: as in tf.train.Optimizer's minimize method.
name: as in tf.train.Optimizer's minimize method.
grad_loss: as in tf.train.Optimizer's minimize method.
Returns:

An Operation, the train_op.
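The contrast between the two update rules can be illustrated with a toy loss. Everything below is a hypothetical simplification, not the library's implementation; in particular, the actual optimizer derives the weights from the stationary distribution of its stochastic matrix rather than taking them as a fixed vector. The point is only that the unconstrained loss uses the objective alone, while the constrained loss also weights in the proxy constraints.

```python
import numpy as np

def unconstrained_loss(objective, proxy_constraints, weights):
    """Ignores the constraints entirely, as minimize_unconstrained does."""
    return objective

def constrained_loss(objective, proxy_constraints, weights):
    """Objective plus a weighted combination of the proxy constraints,
    with `weights` playing the role of the Lagrange-multiplier
    analogues."""
    return objective + float(np.dot(weights, proxy_constraints))
```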