tf.compat.v1.get_local_variable


Gets an existing local variable or creates a new one.

tf.compat.v1.get_local_variable(
    name, shape=None, dtype=None, initializer=None, regularizer=None,
    trainable=False, collections=None, caching_device=None, partitioner=None,
    validate_shape=True, use_resource=None, custom_getter=None, constraint=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE
)

Behavior is the same as in get_variable, except that variables are added to the LOCAL_VARIABLES collection and trainable defaults to False. This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example of reuse (shown with get_variable; get_local_variable follows the same rules):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # variable scope reuse relies on graph mode

def foo():
  with tf.compat.v1.variable_scope("foo", reuse=tf.compat.v1.AUTO_REUSE):
    v = tf.compat.v1.get_variable("v", [1])
  return v

v1 = foo()  # Creates v.
v2 = foo()  # Gets the same, existing v.
assert v1 is v2  # Same variable object, thanks to scope reuse.
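
For get_local_variable itself, the created variable lands in the LOCAL_VARIABLES collection and is excluded from the trainable set. A minimal sketch (the scope and variable names here are illustrative, not part of the API):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # compat.v1 variable scopes assume graph mode

with tf.compat.v1.variable_scope("counters"):
  step = tf.compat.v1.get_local_variable(
      "step", shape=[], dtype=tf.int32,
      initializer=tf.compat.v1.zeros_initializer())

print(step.name)  # "counters/step:0" -- prefixed with the enclosing scope
assert any(v is step for v in tf.compat.v1.local_variables())
assert all(v is not step for v in tf.compat.v1.trainable_variables())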

If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a glorot_uniform_initializer will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape.
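
For example, passing a Tensor as the initializer fixes both the initial value and the shape, so shape can be omitted. A short sketch (the variable name is illustrative):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# The Tensor supplies both the initial value and the shape.
init = tf.constant([1.0, 2.0, 3.0])
v = tf.compat.v1.get_local_variable("v_from_tensor", initializer=init)
print(v.shape)  # (3,)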

Similarly, if the regularizer is None (the default), the default regularizer passed in the variable scope will be used (if that is None too, then by default no regularization is performed).
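
As an illustration, an explicitly passed regularizer is applied to the newly created variable and its result is added to the REGULARIZATION_LOSSES collection; a minimal sketch, with an arbitrary 0.1 scale:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A simple L2 penalty; its result is collected, not applied automatically.
l2 = lambda t: 0.1 * tf.reduce_sum(tf.square(t))
w = tf.compat.v1.get_local_variable("w", shape=[4], regularizer=l2)

reg_losses = tf.compat.v1.get_collection(
    tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
print(len(reg_losses))  # 1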

If a partitioner is provided, a PartitionedVariable is returned. Accessing this object as a Tensor returns the shards concatenated along the partition axis.

Some useful partitioners are available. See, e.g., variable_axis_size_partitioner and min_max_variable_partitioner.
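
To make the PartitionedVariable behavior concrete, here is a minimal sketch using fixed_size_partitioner, a simpler sibling of the partitioners mentioned above; the shapes and shard count are illustrative:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Split a [10, 4] local variable into 2 shards along axis 0.
part = tf.compat.v1.fixed_size_partitioner(num_shards=2, axis=0)
v = tf.compat.v1.get_local_variable("big", shape=[10, 4], partitioner=part)

print(type(v).__name__)               # PartitionedVariable
print(len(list(v)))                   # 2 shards, each of shape (5, 4)
print(tf.convert_to_tensor(v).shape)  # (10, 4): shards concatenated along axis 0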

Args:

name: The name of the new or existing variable.
shape: Shape of the new or existing variable.
dtype: Type of the new or existing variable (defaults to DT_FLOAT).
initializer: Initializer for the variable if one is created. Can either be an initializer object or a Tensor. If it's a Tensor, its shape must be known unless validate_shape is False.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, also adds the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. Defaults to False here.
collections: List of graph collection keys to add the Variable to. Defaults to [GraphKeys.LOCAL_VARIABLES].
caching_device: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device.
partitioner: Optional callable that accepts a fully defined TensorShape and dtype of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned).
validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of the initial value must be known.
use_resource: If False, creates a regular Variable. If True, creates an experimental ResourceVariable instead with well-defined semantics. Defaults to False (will later change to True). When eager execution is enabled this argument is always forced to be True.
custom_getter: Callable that takes as a first argument the true getter and allows overwriting the internal get_variable method.
constraint: An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints for layer weights). The function must take the unprojected variable value as input and return the projected value, with the same shape.
synchronization: Indicates when a distributed variable will be synchronized. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

Returns:

The created or existing Variable (or PartitionedVariable, if a partitioner was used).

Raises:

ValueError: When creating a new variable and shape is not declared, when violating reuse during variable creation, or when initializer dtype and dtype don't match. Reuse is set inside variable_scope.