Aliases:

- `tf.contrib.eager.custom_gradient`
- `tf.custom_gradient`
```python
tf.custom_gradient(f)
```
Defined in `tensorflow/python/ops/custom_gradient.py`.
Decorator to define a function with a custom gradient.
This decorator allows fine-grained control over the gradients of a sequence of operations. This may be useful for multiple reasons, including providing a more efficient or numerically stable gradient for a sequence of operations.
For example, consider the following function that commonly occurs in the computation of cross entropy and log likelihoods:
```python
def log1pexp(x):
  return tf.log(1 + tf.exp(x))
```
Due to numerical instability, the gradient of this function evaluated at x=100 is NaN: `tf.exp(100.)` overflows to infinity in float32, so the naively computed gradient `exp(x) / (1 + exp(x))` evaluates to inf / inf. For example:
```python
x = tf.constant(100.)
y = log1pexp(x)
dy = tf.gradients(y, x)  # Will be NaN when evaluated.
```
The gradient expression can be analytically simplified to provide numerical stability:
```python
@tf.custom_gradient
def log1pexp(x):
  e = tf.exp(x)
  def grad(dy):
    return dy * (1 - 1 / (1 + e))
  return tf.log(1 + e), grad
```
With this definition, the gradient at x=100 will be correctly evaluated as 1.0.
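For instance, repeating the earlier snippet with the decorated definition now yields a finite gradient (`tf.gradients` returns a list of `Tensor`s):

```python
x = tf.constant(100.)
y = log1pexp(x)
dy = tf.gradients(y, x)  # Evaluates to [1.0] instead of [NaN].
```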
See also `tf.RegisterGradient`, which registers a gradient function for a primitive TensorFlow operation. `tf.custom_gradient`, on the other hand, allows for fine-grained control over the gradient computation of a sequence of operations.
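As a rough contrast (the "CustomSquareGrad" registration name below is illustrative, not a built-in), `tf.RegisterGradient` replaces the gradient of a single primitive op, typically in combination with `gradient_override_map`:

```python
# Sketch: override the gradient of the primitive Square op.
@tf.RegisterGradient("CustomSquareGrad")
def _custom_square_grad(op, grad):
  # Same formula as the default gradient: d(x^2)/dx = 2x.
  return grad * 2.0 * op.inputs[0]

g = tf.get_default_graph()
with g.gradient_override_map({"Square": "CustomSquareGrad"}):
  x = tf.constant(3.)
  y = tf.square(x)  # Gradients of y are computed by _custom_square_grad.
```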
Note that if the decorated function uses `Variable`s, the enclosing variable scope must be using `ResourceVariable`s.
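A minimal sketch of this case (the variable name and gradient formulas are purely illustrative; the `variables` keyword is described under Args below). Passing `use_resource=True` to `tf.get_variable` satisfies the `ResourceVariable` requirement, and the created variable is delivered to `grad_fn` through the `variables` argument:

```python
@tf.custom_gradient
def scale(x):
  # use_resource=True makes this a ResourceVariable, as required above.
  w = tf.get_variable("w", shape=[], initializer=tf.ones_initializer(),
                      use_resource=True)
  def grad(dy, variables=None):
    grad_xs = dy * w      # derivative of y = x * w with respect to x
    grad_vars = [dy * x]  # one Tensor per variable in `variables`
    return grad_xs, grad_vars
  return x * w, grad
```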
Args:
f: function `f(*x)` that returns a tuple `(y, grad_fn)` where:
  - `x` is a sequence of `Tensor` inputs to the function.
  - `y` is a `Tensor` or sequence of `Tensor` outputs of applying TensorFlow operations in `f` to `x`.
  - `grad_fn` is a function with the signature `g(*grad_ys)` which returns a list of `Tensor`s - the derivatives of `Tensor`s in `y` with respect to the `Tensor`s in `x`. `grad_ys` is a `Tensor` or sequence of `Tensor`s the same size as `y` holding the initial value gradients for each `Tensor` in `y`. In a pure mathematical sense, a vector-argument vector-valued function `f`'s derivatives should be its Jacobian matrix `J`. Here we are expressing the Jacobian `J` as a function `grad_fn` which defines how `J` will transform a vector `grad_ys` when left-multiplied with it (`grad_ys * J`). This functional representation of a matrix is convenient to use for chain-rule calculation (e.g. in the back-propagation algorithm).

  If `f` uses `Variable`s (that are not part of the inputs), i.e. through `get_variable`, then `grad_fn` should have signature `g(*grad_ys, variables=None)`, where `variables` is a list of the `Variable`s, and it should return a 2-tuple `(grad_xs, grad_vars)`, where `grad_xs` is the same as above, and `grad_vars` is a `list<Tensor>` with the derivatives of `Tensor`s in `y` with respect to the variables (that is, `grad_vars` has one `Tensor` per variable in `variables`).
Returns:
A function `h(x)` which returns the same value as `f(x)[0]` and whose gradient (as calculated by `tf.gradients`) is determined by `f(x)[1]`.
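To illustrate the `g(*grad_ys)` signature described under Args, here is a sketch (the function name is illustrative) of a function with two outputs, whose `grad_fn` receives one upstream gradient per output:

```python
@tf.custom_gradient
def square_and_cube(x):
  def grad(dy_square, dy_cube):
    # Vector-Jacobian product: one term per output of the function.
    return dy_square * 2.0 * x + dy_cube * 3.0 * tf.square(x)
  return (tf.square(x), tf.pow(x, 3.)), grad
```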