tf.contrib.layers.recompute_grad(
*args,
**kwargs
)
Defined in tensorflow/contrib/layers/python/layers/rev_block_lib.py.
Decorator that recomputes the function on the backwards pass.
To use this function, you must use ResourceVariables (i.e. `variable_scope(name, use_resource=True)`), which are the default in Eager mode and when running on TPU.
Args:

- `fn`: a function that takes Tensors (all as positional arguments) and returns a tuple of Tensors. Note that `fn` should not close over any other Tensors or Variables.
- `use_data_dep`: `bool`, if `True` will use a dummy data dependency to force the recompute to happen. If `False` will use a control dependency. By default will be `True` if in an XLA context and `False` otherwise. XLA ignores control dependencies and so this data dependency is necessary.
- `tupleize_grads`: `bool`, if `True` will use control dependencies to ensure that all gradients are produced before any are consumed by downstream ops. If `use_data_dep` is also `True`, will use a data dependency instead of a control dependency.
Returns:
A wrapped `fn` that is identical to `fn` when called, but whose activations will be discarded and recomputed on the backwards pass (i.e. on a call to `tf.gradients`).
Raises:

- `ValueError`: if `fn` closes over any Tensors or Variables.
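To make the trade-off concrete, the sketch below illustrates the recompute-on-backward idea in plain Python, outside of TensorFlow: only the input to `fn` is saved during the forward pass, and the forward computation is rerun during the backward pass instead of keeping the activation in memory. The `Recompute` class and its `forward`/`backward` methods are illustrative names, not part of the TensorFlow API.

```python
# A minimal, framework-free sketch of the memory-saving idea behind
# recompute_grad. Names here (Recompute, forward, backward) are
# hypothetical; TensorFlow wires this into tf.gradients automatically.

class Recompute:
    def __init__(self, fn, grad_fn):
        self.fn = fn            # forward function
        self.grad_fn = grad_fn  # gradient of fn w.r.t. its input
        self.saved_input = None

    def forward(self, x):
        # Save only the input; the activation fn(x) is NOT stored
        # for the backward pass.
        self.saved_input = x
        return self.fn(x)

    def backward(self, upstream_grad):
        # Recompute the forward pass from the saved input, then apply
        # the chain rule with the upstream gradient.
        x = self.saved_input
        _ = self.fn(x)  # recomputed activation, used transiently
        return upstream_grad * self.grad_fn(x)


# Example: fn(x) = x**2, so d(fn)/dx = 2x.
node = Recompute(fn=lambda x: x * x, grad_fn=lambda x: 2.0 * x)
y = node.forward(3.0)   # -> 9.0
g = node.backward(1.0)  # -> 6.0, computed by rerunning the forward pass
```

This trades extra compute for lower peak memory, which is the same trade the decorator makes: activations inside the wrapped `fn` are freed after the forward pass and rebuilt on demand during backprop.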