tf.contrib.eager.Variable

Class Variable

Defined in tensorflow/python/ops/resource_variable_ops.py.

Variable based on resource handles.

See the Variables How To for a high-level overview.

A ResourceVariable allows you to maintain state across subsequent calls to session.run.

The ResourceVariable constructor requires an initial value for the variable, which can be a Tensor of any type and shape. The initial value defines the type and shape of the variable. After construction, the type and shape of the variable are fixed. The value can be changed using one of the assign methods.

Just like any Tensor, variables created with tf.Variable(use_resource=True) can be used as inputs for other Ops in the graph. Additionally, all the operators overloaded for the Tensor class are carried over to variables, so you can also add nodes to the graph by just doing arithmetic on variables.

Unlike ref-based variables, a ResourceVariable has well-defined semantics. Each usage of a ResourceVariable in a TensorFlow graph adds a read_value operation to the graph. The Tensors returned by a read_value operation are guaranteed to see all modifications to the value of the variable which happen in any operation on which the read_value depends (either directly, indirectly, or via a control dependency), and guaranteed not to see any modification to the value of the variable from operations that depend on the read_value operation. Updates from operations that have no dependency relationship with the read_value operation might or might not be visible to read_value.

For example, if there is more than one assignment to a ResourceVariable in a single session.run call, there is a well-defined value for each operation that uses the variable's value if the assignments and the read are connected by edges in the graph. Consider the following example, in which two writes can cause tf.Variable and tf.ResourceVariable to behave differently:

a = tf.Variable(1.0, use_resource=True)
a.initializer.run()

assign = a.assign(2.0)
with tf.control_dependencies([assign]):
  b = a.read_value()
with tf.control_dependencies([b]):
  other_assign = a.assign(3.0)
with tf.control_dependencies([other_assign]):
  # Will print 2.0 because the value was read before other_assign ran. If
  # `a` was a tf.Variable instead, 2.0 or 3.0 could be printed.
  tf.Print(b, [b]).eval()
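
Under eager execution the same sequence is unambiguous because assignments and reads take effect immediately. The following is a minimal sketch of the eager counterpart, assuming eager execution has already been enabled (for example via tf.enable_eager_execution() in TF 1.x):

a = tf.contrib.eager.Variable(1.0)
a.assign(2.0)
b = a.read_value()   # b holds 2.0 at the time of the read
a.assign(3.0)
print(b)             # still 2.0; the later assignment does not affect the earlier read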

__init__

__init__(
    initial_value=None,
    trainable=True,
    collections=None,
    validate_shape=True,
    caching_device=None,
    name=None,
    dtype=None,
    variable_def=None,
    import_scope=None,
    constraint=None
)

Creates a variable.

Args:

  • initial_value: A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable. The initial value must have a shape specified unless validate_shape is set to False. Can also be a callable with no argument that returns the initial value when called. (Note that initializer functions from init_ops.py must first be bound to a shape before being used here.)
  • trainable: If True, the default, also adds the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. This collection is used as the default list of variables to use by the Optimizer classes.
  • collections: List of graph collections keys. The new variable is added to these collections. Defaults to [GraphKeys.GLOBAL_VARIABLES].
  • validate_shape: Ignored. Provided for compatibility with tf.Variable.
  • caching_device: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements.
  • name: Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically.
  • dtype: If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor) or float32 will be used (if it is a Python object convertible to a Tensor).
  • variable_def: VariableDef protocol buffer. If not None, recreates the ResourceVariable object with its contents. variable_def and other arguments (except for import_scope) are mutually exclusive.
  • import_scope: Optional string. Name scope to add to the ResourceVariable. Only used when variable_def is provided.
  • constraint: An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.

Raises:

  • ValueError: If the initial value is not specified, or does not have a shape and validate_shape is True.

Eager Compatibility

When Eager Execution is enabled, the default for the collections argument is None, which signifies that this Variable will not be added to any collections.
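
For illustration, a hedged sketch of constructing and updating a variable under eager execution (assuming eager execution has been enabled beforehand):

v = tf.contrib.eager.Variable(tf.zeros([2, 3]), name='weights')
print(v.numpy())          # a 2x3 array of zeros
v.assign_add(tf.ones([2, 3]))
print(v.numpy())          # a 2x3 array of ones; the variable is not added to any collection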

Child Classes

class SaveSliceInfo

Properties

constraint

Returns the constraint function associated with this variable.

Returns:

The constraint function that was passed to the variable constructor. Can be None if no constraint was passed.
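
For example, a projection function can be supplied at construction time. The sketch below is illustrative only; the helper name clip_non_negative is made up for this example:

def clip_non_negative(t):
  # Hypothetical constraint: project the updated value onto the non-negative orthant.
  return tf.maximum(t, 0.)

v = tf.contrib.eager.Variable(-1.0, constraint=clip_non_negative)
print(v.constraint is clip_non_negative)  # True; optimizers apply the function after each update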

create

The op responsible for initializing this variable.

device

The device this variable is on.

dtype

The dtype of this variable.

graph

The Graph of this variable.

handle

The handle by which this variable can be accessed.

initial_value

Returns the Tensor used as the initial value for the variable.

initializer

The op responsible for initializing this variable.

name

The name of the handle for this variable.

op

The op for this variable.

shape

The shape of this variable.

trainable

Methods

tf.contrib.eager.Variable.__abs__

__abs__(
    x,
    name=None
)

Computes the absolute value of a tensor.

Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \( \sqrt{a^2 + b^2}\). For example:

x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])
tf.abs(x)  # [5.25594902, 6.60492229]

Args:

  • x: A Tensor or SparseTensor of type float16, float32, float64, int32, int64, complex64 or complex128.
  • name: A name for the operation (optional).

Returns:

A Tensor or SparseTensor the same size and type as x with absolute values. Note, for complex64 or complex128 input, the returned Tensor will be of type float32 or float64, respectively.

tf.contrib.eager.Variable.__add__

__add__(
    a,
    *args,
    **kwargs
)

Returns x + y element-wise.

NOTE: math.add supports broadcasting. AddN does not. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

tf.contrib.eager.Variable.__and__

__and__(
    a,
    *args,
    **kwargs
)

Returns the truth value of x AND y element-wise.

NOTE: math.logical_and supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor of type bool.
  • y: A Tensor of type bool.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__bool__

__bool__()

tf.contrib.eager.Variable.__deepcopy__

__deepcopy__(memo)

tf.contrib.eager.Variable.__div__

__div__(
    a,
    *args,
    **kwargs
)

Divides two values using Python 2 semantics. Used for Tensor.__div__.

Args:

  • x: Tensor numerator of real numeric type.
  • y: Tensor denominator of real numeric type.
  • name: A name for the operation (optional).

Returns:

x / y returns the quotient of x and y.

tf.contrib.eager.Variable.__floordiv__

__floordiv__(
    a,
    *args,
    **kwargs
)

Divides x / y elementwise, rounding toward the most negative integer.

The same as tf.div(x,y) for integers, but uses tf.floor(tf.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

x and y must have the same type, and the result will have the same type as well.
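
For example, flooring differs from truncation for negative operands (illustrative values shown in the comments):

a = tf.constant(7)
b = tf.constant(2)
a // b     # ==> 3
(-a) // b  # ==> -4, not -3: the quotient is floored, not truncated toward zero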

Args:

  • x: Tensor numerator of real numeric type.
  • y: Tensor denominator of real numeric type.
  • name: A name for the operation (optional).

Returns:

x / y rounded down.

Raises:

  • TypeError: If the inputs are complex.

tf.contrib.eager.Variable.__ge__

__ge__(
    a,
    *args,
    **kwargs
)

Returns the truth value of (x >= y) element-wise.

NOTE: math.greater_equal supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__getitem__

__getitem__(
    var,
    slice_spec
)

Creates a slice helper object given a variable.

This allows creating a sub-tensor from part of the current contents of a variable. See tf.Tensor.__getitem__ for detailed examples of slicing.

This function in addition also allows assignment to a sliced range. This is similar to __setitem__ functionality in Python. However, the syntax is different so that the user can capture the assignment operation for grouping or passing to sess.run(). For example,

import tensorflow as tf
A = tf.Variable([[1,2,3], [4,5,6], [7,8,9]], dtype=tf.float32)
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(A[:2, :2]))  # => [[1,2], [4,5]]

  op = A[:2,:2].assign(22. * tf.ones((2, 2)))
  print(sess.run(op))  # => [[22, 22, 3], [22, 22, 6], [7,8,9]]

Note that assignments currently do not support NumPy broadcasting semantics.

Args:

  • var: An ops.Variable object.
  • slice_spec: The arguments to Tensor.__getitem__.

Returns:

The appropriate slice of the variable, based on slice_spec, as a sliced tensor object. That object also has an assign() method that can be used to generate an assignment operation.

Raises:

  • ValueError: If a slice range has a negative size.
  • TypeError: If the slice indices aren't int, slice, ellipsis, tf.newaxis, or int32/int64 tensors.

tf.contrib.eager.Variable.__gt__

__gt__(
    a,
    *args,
    **kwargs
)

Returns the truth value of (x > y) element-wise.

NOTE: math.greater supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__iadd__

__iadd__(unused_other)

tf.contrib.eager.Variable.__idiv__

__idiv__(unused_other)

tf.contrib.eager.Variable.__imul__

__imul__(unused_other)

tf.contrib.eager.Variable.__int__

__int__()

tf.contrib.eager.Variable.__invert__

__invert__(
    a,
    *args,
    **kwargs
)

Returns the truth value of NOT x element-wise.

Args:

  • x: A Tensor of type bool.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__ipow__

__ipow__(unused_other)

tf.contrib.eager.Variable.__irealdiv__

__irealdiv__(unused_other)

tf.contrib.eager.Variable.__isub__

__isub__(unused_other)

tf.contrib.eager.Variable.__iter__

__iter__()

Dummy method to prevent iteration. Do not call.

NOTE(mrry): If we register __getitem__ as an overloaded operator, Python will valiantly attempt to iterate over the variable's Tensor from 0 to infinity. Declaring this method prevents this unintended behavior.

Raises:

  • TypeError: when invoked.

tf.contrib.eager.Variable.__itruediv__

__itruediv__(unused_other)

tf.contrib.eager.Variable.__le__

__le__(
    a,
    *args,
    **kwargs
)

Returns the truth value of (x <= y) element-wise.

NOTE: math.less_equal supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__lt__

__lt__(
    a,
    *args,
    **kwargs
)

Returns the truth value of (x < y) element-wise.

NOTE: math.less supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__matmul__

__matmul__(
    a,
    *args,
    **kwargs
)

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication arguments, and any further outer dimensions match.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

For example:

# 2-D tensor `a`
# [[1, 2, 3],
#  [4, 5, 6]]
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])

# 2-D tensor `b`
# [[ 7,  8],
#  [ 9, 10],
#  [11, 12]]
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])

# `a` * `b`
# [[ 58,  64],
#  [139, 154]]
c = tf.matmul(a, b)


# 3-D tensor `a`
# [[[ 1,  2,  3],
#   [ 4,  5,  6]],
#  [[ 7,  8,  9],
#   [10, 11, 12]]]
a = tf.constant(np.arange(1, 13, dtype=np.int32),
                shape=[2, 2, 3])

# 3-D tensor `b`
# [[[13, 14],
#   [15, 16],
#   [17, 18]],
#  [[19, 20],
#   [21, 22],
#   [23, 24]]]
b = tf.constant(np.arange(13, 25, dtype=np.int32),
                shape=[2, 3, 2])

# `a` * `b`
# [[[ 94, 100],
#   [229, 244]],
#  [[508, 532],
#   [697, 730]]]
c = tf.matmul(a, b)

# Since python >= 3.5 the @ operator is supported (see PEP 465).
# In TensorFlow, it simply calls the `tf.matmul()` function, so the
# following lines are equivalent:
d = a @ b @ [[10.], [11.]]
d = tf.matmul(tf.matmul(a, b), [[10.], [11.]])

Args:

  • a: Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1.
  • b: Tensor with same type and rank as a.
  • transpose_a: If True, a is transposed before multiplication.
  • transpose_b: If True, b is transposed before multiplication.
  • adjoint_a: If True, a is conjugated and transposed before multiplication.
  • adjoint_b: If True, b is conjugated and transposed before multiplication.
  • a_is_sparse: If True, a is treated as a sparse matrix.
  • b_is_sparse: If True, b is treated as a sparse matrix.
  • name: Name for the operation (optional).

Returns:

A Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.

  • Note: This is matrix product, not element-wise product.

Raises:

  • ValueError: If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True.

tf.contrib.eager.Variable.__mod__

__mod__(
    a,
    *args,
    **kwargs
)

Returns the element-wise remainder of division.

When x < 0 xor y < 0 is true, this follows Python semantics in that the result is consistent with a flooring divide, e.g. floor(x / y) * y + mod(x, y) = x.

NOTE: floormod supports broadcasting. More about broadcasting here
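
A short illustration of the flooring behaviour for mixed signs (values shown in the comment):

x = tf.constant([7, -7])
y = tf.constant([3, 3])
x % y  # ==> [1, 2]; e.g. floor(-7 / 3) * 3 + 2 == -7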

Args:

  • x: A Tensor. Must be one of the following types: int32, int64, bfloat16, half, float32, float64.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

tf.contrib.eager.Variable.__mul__

__mul__(
    a,
    *args,
    **kwargs
)

Dispatches element-wise multiplication for "Dense*Dense" and "Dense*Sparse" operand combinations.

tf.contrib.eager.Variable.__neg__

__neg__(
    a,
    *args,
    **kwargs
)

Computes numerical negative value element-wise.

I.e., \(y = -x\).

Args:

  • x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

tf.contrib.eager.Variable.__nonzero__

__nonzero__()

tf.contrib.eager.Variable.__or__

__or__(
    a,
    *args,
    **kwargs
)

Returns the truth value of x OR y element-wise.

NOTE: math.logical_or supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor of type bool.
  • y: A Tensor of type bool.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__pow__

__pow__(
    a,
    *args,
    **kwargs
)

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]

Args:

  • x: A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
  • y: A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
  • name: A name for the operation (optional).

Returns:

A Tensor.

tf.contrib.eager.Variable.__radd__

__radd__(
    a,
    *args,
    **kwargs
)

Returns x + y element-wise.

NOTE: math.add supports broadcasting. AddN does not. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

tf.contrib.eager.Variable.__rand__

__rand__(
    a,
    *args,
    **kwargs
)

Returns the truth value of x AND y element-wise.

NOTE: math.logical_and supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor of type bool.
  • y: A Tensor of type bool.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__rdiv__

__rdiv__(
    a,
    *args,
    **kwargs
)

Divides two values using Python 2 semantics. Used for Tensor.__div__.

Args:

  • x: Tensor numerator of real numeric type.
  • y: Tensor denominator of real numeric type.
  • name: A name for the operation (optional).

Returns:

x / y returns the quotient of x and y.

tf.contrib.eager.Variable.__rfloordiv__

__rfloordiv__(
    a,
    *args,
    **kwargs
)

Divides x / y elementwise, rounding toward the most negative integer.

The same as tf.div(x,y) for integers, but uses tf.floor(tf.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

x and y must have the same type, and the result will have the same type as well.

Args:

  • x: Tensor numerator of real numeric type.
  • y: Tensor denominator of real numeric type.
  • name: A name for the operation (optional).

Returns:

x / y rounded down.

Raises:

  • TypeError: If the inputs are complex.

tf.contrib.eager.Variable.__rmatmul__

__rmatmul__(
    a,
    *args,
    **kwargs
)

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication arguments, and any further outer dimensions match.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

For example:

# 2-D tensor `a`
# [[1, 2, 3],
#  [4, 5, 6]]
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])

# 2-D tensor `b`
# [[ 7,  8],
#  [ 9, 10],
#  [11, 12]]
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])

# `a` * `b`
# [[ 58,  64],
#  [139, 154]]
c = tf.matmul(a, b)


# 3-D tensor `a`
# [[[ 1,  2,  3],
#   [ 4,  5,  6]],
#  [[ 7,  8,  9],
#   [10, 11, 12]]]
a = tf.constant(np.arange(1, 13, dtype=np.int32),
                shape=[2, 2, 3])

# 3-D tensor `b`
# [[[13, 14],
#   [15, 16],
#   [17, 18]],
#  [[19, 20],
#   [21, 22],
#   [23, 24]]]
b = tf.constant(np.arange(13, 25, dtype=np.int32),
                shape=[2, 3, 2])

# `a` * `b`
# [[[ 94, 100],
#   [229, 244]],
#  [[508, 532],
#   [697, 730]]]
c = tf.matmul(a, b)

# Since python >= 3.5 the @ operator is supported (see PEP 465).
# In TensorFlow, it simply calls the `tf.matmul()` function, so the
# following lines are equivalent:
d = a @ b @ [[10.], [11.]]
d = tf.matmul(tf.matmul(a, b), [[10.], [11.]])

Args:

  • a: Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1.
  • b: Tensor with same type and rank as a.
  • transpose_a: If True, a is transposed before multiplication.
  • transpose_b: If True, b is transposed before multiplication.
  • adjoint_a: If True, a is conjugated and transposed before multiplication.
  • adjoint_b: If True, b is conjugated and transposed before multiplication.
  • a_is_sparse: If True, a is treated as a sparse matrix.
  • b_is_sparse: If True, b is treated as a sparse matrix.
  • name: Name for the operation (optional).

Returns:

A Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.

  • Note: This is matrix product, not element-wise product.

Raises:

  • ValueError: If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True.

tf.contrib.eager.Variable.__rmod__

__rmod__(
    a,
    *args,
    **kwargs
)

Returns the element-wise remainder of division.

When x < 0 xor y < 0 is true, this follows Python semantics in that the result is consistent with a flooring divide, e.g. floor(x / y) * y + mod(x, y) = x.

NOTE: floormod supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: int32, int64, bfloat16, half, float32, float64.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

tf.contrib.eager.Variable.__rmul__

__rmul__(
    a,
    *args,
    **kwargs
)

Dispatches element-wise multiplication for "Dense*Dense" and "Dense*Sparse" operand combinations.

tf.contrib.eager.Variable.__ror__

__ror__(
    a,
    *args,
    **kwargs
)

Returns the truth value of x OR y element-wise.

NOTE: math.logical_or supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor of type bool.
  • y: A Tensor of type bool.
  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.__rpow__

__rpow__(
    a,
    *args,
    **kwargs
)

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]

Args:

  • x: A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
  • y: A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
  • name: A name for the operation (optional).

Returns:

A Tensor.

tf.contrib.eager.Variable.__rsub__

__rsub__(
    a,
    *args,
    **kwargs
)

Returns x - y element-wise.

NOTE: Subtract supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

tf.contrib.eager.Variable.__rtruediv__

__rtruediv__(
    a,
    *args,
    **kwargs
)

tf.contrib.eager.Variable.__rxor__

__rxor__(
    a,
    *args,
    **kwargs
)

x ^ y = (x | y) & ~(x & y).

tf.contrib.eager.Variable.__sub__

__sub__(
    a,
    *args,
    **kwargs
)

Returns x - y element-wise.

NOTE: Subtract supports broadcasting. More about broadcasting here

Args:

  • x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  • y: A Tensor. Must have the same type as x.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

tf.contrib.eager.Variable.__truediv__

__truediv__(
    a,
    *args,
    **kwargs
)

tf.contrib.eager.Variable.__xor__

__xor__(
    a,
    *args,
    **kwargs
)

x ^ y = (x | y) & ~(x & y).

tf.contrib.eager.Variable.assign

assign(
    value,
    use_locking=None,
    name=None,
    read_value=True
)

Assigns a new value to this variable.

Args:

  • value: A Tensor. The new value for this variable.
  • use_locking: If True, use locking during the assignment.
  • name: The name to use for the assignment.
  • read_value: A bool. Whether to read and return the new value of the variable or not.

Returns:

If read_value is True, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the Operation that does the assignment, and when in eager mode it will return None.
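
A minimal eager-mode sketch (assuming eager execution is enabled), showing that the return value reflects the assignment when read_value is True:

v = tf.contrib.eager.Variable(10.0)
new_value = v.assign(20.0)   # returns the updated value because read_value defaults to True
v.assign_add(5.0)
print(v.numpy())             # 25.0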

tf.contrib.eager.Variable.assign_add

assign_add(
    delta,
    use_locking=None,
    name=None,
    read_value=True
)

Adds a value to this variable.

Args:

  • delta: A Tensor. The value to add to this variable.
  • use_locking: If True, use locking during the operation.
  • name: The name to use for the operation.
  • read_value: A bool. Whether to read and return the new value of the variable or not.

Returns:

If read_value is True, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the Operation that does the assignment, and when in eager mode it will return None.

tf.contrib.eager.Variable.assign_sub

assign_sub(
    delta,
    use_locking=None,
    name=None,
    read_value=True
)

Subtracts a value from this variable.

Args:

  • delta: A Tensor. The value to subtract from this variable.
  • use_locking: If True, use locking during the operation.
  • name: The name to use for the operation.
  • read_value: A bool. Whether to read and return the new value of the variable or not.

Returns:

If read_value is True, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the Operation that does the assignment, and when in eager mode it will return None.

tf.contrib.eager.Variable.batch_scatter_update

batch_scatter_update(
    sparse_delta,
    use_locking=False,
    name=None
)

Assigns IndexedSlices to this variable batch-wise.

Analogous to batch_gather. This assumes that this variable and the sparse_delta IndexedSlices have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

num_prefix_dims = sparse_delta.indices.ndims - 1
batch_dim = num_prefix_dims + 1
sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[batch_dim:]

where

sparse_delta.updates.shape[:num_prefix_dims] == sparse_delta.indices.shape[:num_prefix_dims] == var.shape[:num_prefix_dims]

And the operation performed can be expressed as:

var[i_1, ..., i_n, sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[i_1, ..., i_n, j]

When sparse_delta.indices is a 1D tensor, this operation is equivalent to scatter_update.

To avoid this operation, one could loop over the first ndims of the variable and use scatter_update on the subtensors that result from slicing the first dimension. This is a valid option for ndims = 1, but less efficient than this implementation.

Args:

  • sparse_delta: IndexedSlices to be assigned to this variable.
  • use_locking: If True, use locking during the operation.
  • name: the name of the operation.

Returns:

A Tensor that will hold the new value of this variable after the scattered assignment has completed.

Raises:

  • ValueError: if sparse_delta is not an IndexedSlices.

tf.contrib.eager.Variable.count_up_to

count_up_to(limit)

Increments this variable until it reaches limit. (deprecated)

When that Op is run it tries to increment the variable by 1. If incrementing the variable would bring it above limit then the Op raises the exception OutOfRangeError.

If no error is raised, the Op outputs the value of the variable before the increment.

This is essentially a shortcut for count_up_to(self, limit).

Args:

  • limit: value at which incrementing the variable raises an error.

Returns:

A Tensor that will hold the variable value before the increment. If no other Op modifies this variable, the values produced will all be distinct.

tf.contrib.eager.Variable.eval

eval(session=None)

Evaluates and returns the value of this variable.
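
A small graph-mode sketch; eval requires a session, here supplied by the with block:

v = tf.Variable([1.0, 2.0], use_resource=True)
with tf.Session() as sess:
  sess.run(v.initializer)
  print(v.eval())  # [1. 2.], evaluated in the default session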

tf.contrib.eager.Variable.from_proto

@staticmethod
from_proto(
    variable_def,
    import_scope=None
)

Returns a Variable object created from variable_def.

tf.contrib.eager.Variable.get_shape

get_shape()

Alias of Variable.shape.

tf.contrib.eager.Variable.initialized_value

initialized_value()

Returns the value of the initialized variable.

You should use this instead of the variable itself to initialize another variable with a value that depends on the value of this variable.

# Initialize 'v' with a random tensor.
v = tf.Variable(tf.truncated_normal([10, 40]))
# Use `initialized_value` to guarantee that `v` has been
# initialized before its value is used to initialize `w`.
# The random values are picked only once.
w = tf.Variable(v.initialized_value() * 2.0)

Returns:

A Tensor holding the value of this variable after its initializer has run.

tf.contrib.eager.Variable.is_initialized

is_initialized(name=None)

Checks whether a resource variable has been initialized.

Outputs a boolean scalar indicating whether the variable has been initialized.

Args:

  • name: A name for the operation (optional).

Returns:

A Tensor of type bool.

tf.contrib.eager.Variable.load

load(
    value,
    session=None
)

Loads a new value into this variable.

Writes the new value to the variable's memory. Doesn't add ops to the graph.

This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.Session for more information on launching a graph and on sessions.

v = tf.Variable([1, 2])
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    # Usage passing the session explicitly.
    v.load([2, 3], sess)
    print(v.eval(sess)) # prints [2 3]
    # Usage with the default session.  The 'with' block
    # above makes 'sess' the default session.
    v.load([3, 4], sess)
    print(v.eval()) # prints [3 4]

Args:

  • value: New variable value
  • session: The session to use to evaluate this variable. If none, the default session is used.

Raises:

  • ValueError: If the session is not passed and there is no default session available.

tf.contrib.eager.Variable.numpy

numpy()

tf.contrib.eager.Variable.read_value

read_value()

Constructs an op which reads the value of this variable.

Should be used when there are multiple reads, or when it is desirable to read the value only after some condition is true.

Returns:

the read operation.
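
For example, read_value can be combined with control dependencies to pin a read after an update, as in this graph-mode sketch:

v = tf.Variable(0.0, use_resource=True)
update = v.assign_add(1.0)
with tf.control_dependencies([update]):
  after_update = v.read_value()  # guaranteed to observe the increment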

tf.contrib.eager.Variable.scatter_add

scatter_add(
    sparse_delta,
    use_locking=False,
    name=None
)

Adds IndexedSlices to this variable.

Args:

  • sparse_delta: IndexedSlices to be added to this variable.
  • use_locking: If True, use locking during the operation.
  • name: the name of the operation.

Returns:

A Tensor that will hold the new value of this variable after the scattered addition has completed.

Raises:

  • ValueError: if sparse_delta is not an IndexedSlices.
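
An illustrative eager-mode sketch (assuming eager execution is enabled) that adds two slices by index:

v = tf.contrib.eager.Variable([1.0, 2.0, 3.0, 4.0])
delta = tf.IndexedSlices(values=tf.constant([10.0, 20.0]),
                         indices=tf.constant([0, 2]))
v.scatter_add(delta)
print(v.numpy())  # [11.  2. 23.  4.]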

tf.contrib.eager.Variable.scatter_nd_add

scatter_nd_add(
    indices,
    updates,
    name=None
)

Applies sparse addition to individual values or slices in a Variable.

ref is a Tensor with rank P and indices is a Tensor of rank Q.

indices must be an integer tensor containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.

updates is a Tensor of rank Q-1+P-K with shape:

[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].

For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this:

    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1] ,[7]])
    updates = tf.constant([9, 10, 11, 12])
    add = ref.scatter_nd_add(indices, updates)
    with tf.Session() as sess:
      print(sess.run(add))

The resulting update to ref would look like this:

[1, 13, 3, 14, 14, 6, 7, 20]

See tf.scatter_nd for more details about how to make updates to slices.

Args:

  • indices: The indices to be used in the operation.
  • updates: The values to be used in the operation.
  • name: the name of the operation.

Returns:

A Tensor that will hold the new value of this variable after the scattered addition has completed.

Raises:

  • ValueError: if sparse_delta is not an IndexedSlices.

tf.contrib.eager.Variable.scatter_nd_sub

scatter_nd_sub(
    indices,
    updates,
    name=None
)

Applies sparse subtraction to individual values or slices in a Variable.

ref is a Tensor with rank P and indices is a Tensor of rank Q.

indices must be an integer tensor containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.

updates is a Tensor of rank Q-1+P-K with shape:

[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].

For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that update would look like this:

    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1] ,[7]])
    updates = tf.constant([9, 10, 11, 12])
    op = ref.scatter_nd_sub(indices, updates)
    with tf.Session() as sess:
      print(sess.run(op))

The resulting update to ref would look like this:

[1, -9, 3, -6, -4, 6, 7, -4]

See tf.scatter_nd for more details about how to make updates to slices.

Args:

  • indices: The indices to be used in the operation.
  • updates: The values to be used in the operation.
  • name: the name of the operation.

Returns:

A Tensor that will hold the new value of this variable after the scattered subtraction has completed.

Raises:

  • ValueError: if sparse_delta is not an IndexedSlices.

tf.contrib.eager.Variable.scatter_nd_update

scatter_nd_update(
    indices,
    updates,
    name=None
)

Applies sparse assignment to individual values or slices in a Variable.

ref is a Tensor with rank P and indices is a Tensor of rank Q.

indices must be an integer tensor containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.

updates is a Tensor of rank Q-1+P-K with shape:

[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].

For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:

    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1] ,[7]])
    updates = tf.constant([9, 10, 11, 12])
    op = ref.scatter_nd_update(indices, updates)
    with tf.Session() as sess:
      print(sess.run(op))

The resulting update to ref would look like this:

[1, 11, 3, 10, 9, 6, 7, 12]

See tf.scatter_nd for more details about how to make updates to slices.

Args:

  • indices: The indices to be used in the operation.
  • updates: The values to be used in the operation.
  • name: the name of the operation.

Returns:

A Tensor that will hold the new value of this variable after the scattered assignment has completed.

Raises:

  • ValueError: if sparse_delta is not an IndexedSlices.

tf.contrib.eager.Variable.scatter_sub

scatter_sub(
    sparse_delta,
    use_locking=False,
    name=None
)

Subtracts IndexedSlices from this variable.

Args:

  • sparse_delta: IndexedSlices to be subtracted from this variable.
  • use_locking: If True, use locking during the operation.
  • name: the name of the operation.

Returns:

A Tensor that will hold the new value of this variable after the scattered subtraction has completed.

Raises:

  • ValueError: if sparse_delta is not an IndexedSlices.

tf.contrib.eager.Variable.scatter_update

scatter_update(
    sparse_delta,
    use_locking=False,
    name=None
)

Assigns IndexedSlices to this variable.

Args:

  • sparse_delta: IndexedSlices to be assigned to this variable.
  • use_locking: If True, use locking during the operation.
  • name: the name of the operation.

Returns:

A Tensor that will hold the new value of this variable after the scattered assignment has completed.

Raises:

  • ValueError: if sparse_delta is not an IndexedSlices.

tf.contrib.eager.Variable.set_shape

set_shape(shape)

Unsupported.

tf.contrib.eager.Variable.sparse_read

sparse_read(
    indices,
    name=None
)

Reads the value of this variable sparsely, using gather.
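
For example (illustrative, assuming eager execution):

v = tf.contrib.eager.Variable([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
v.sparse_read([2, 0])  # ==> [[5., 6.], [1., 2.]], i.e. rows 2 and 0 gathered in order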

tf.contrib.eager.Variable.to_proto

to_proto(export_scope=None)

Converts a ResourceVariable to a VariableDef protocol buffer.

Args:

  • export_scope: Optional string. Name scope to remove.

Raises:

  • RuntimeError: If run in EAGER mode.

Returns:

A VariableDef protocol buffer, or None if the Variable is not in the specified name scope.

tf.contrib.eager.Variable.value

value()

A cached operation which reads the value of this variable.

Class Members

__array_priority__