tf.Tensor

Represents one of the outputs of an Operation.

tf.Tensor(
    op, value_index, dtype
)

A Tensor is a symbolic handle to one of the outputs of an Operation. It does not hold the values of that operation's output, but instead provides a means of computing those values in a TensorFlow tf.compat.v1.Session.

This class has two primary purposes:

  1. A Tensor can be passed as an input to another Operation. This builds a dataflow connection between operations, which enables TensorFlow to execute an entire Graph that represents a large, multi-step computation.

  2. After the graph has been launched in a session, the value of the Tensor can be computed by passing it to tf.compat.v1.Session.run. t.eval() is a shortcut for calling tf.compat.v1.get_default_session().run(t); a short sketch of this shortcut follows the example below.

In the following example, c, d, and e are symbolic Tensor objects, whereas result is a numpy array that stores a concrete value:

# Build a dataflow graph.
c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d)

# Construct a `Session` to execute the graph.
sess = tf.compat.v1.Session()

# Execute the graph and store the value that `e` represents in `result`.
result = sess.run(e)
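
For instance, the same value can be obtained with the eval() shortcut mentioned above; this is a minimal sketch that assumes the Session constructed above is still open:

# Make `sess` the default session so that `e.eval()` can find it.
with sess.as_default():
  result = e.eval()  # Equivalent to `sess.run(e)`.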

Args:

Attributes:

Raises:

Methods

__abs__

__abs__(
    x, name=None
)

Computes the absolute value of a tensor.

Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input.

Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \(\sqrt{a^2 + b^2}\). For example:

x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])
tf.abs(x)  # [5.25594902, 6.60492229]

Args:

Returns:

A Tensor or SparseTensor of the same size, type, and sparsity as x, with absolute values. Note, for complex64 or complex128 input, the returned Tensor will be of type float32 or float64, respectively.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)

__add__

__add__(
    x, y
)

Dispatches to add for strings and add_v2 for all other types.
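
For example, a small sketch (values chosen here purely for illustration):

x = tf.constant([1, 2, 3])
y = tf.constant([10, 20, 30])
x + y  # => [11, 22, 33]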

__and__

__and__(
    x, y
)

Returns the truth value of x AND y element-wise.

NOTE: math.logical_and supports broadcasting. More about broadcasting here
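
A brief illustrative sketch (boolean values assumed for this example):

x = tf.constant([True, True, False, False])
y = tf.constant([True, False, True, False])
x & y  # => [True, False, False, False]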

Args:

Returns:

A Tensor of type bool.

__bool__

__bool__()

Dummy method to prevent a tensor from being used as a Python bool.

This overload raises a TypeError when the user inadvertently treats a Tensor as a boolean (most commonly in an if or while statement), in code that was not converted by AutoGraph. For example:

if tf.constant(True):  # Will raise.
  # ...

if tf.constant(5) < tf.constant(7):  # Will raise.
  # ...

Raises:

TypeError.

__div__

__div__(
    x, y
)

Divide two values using Python 2 semantics.

Used for Tensor.__div__.

Args:

Returns:

x / y returns the quotient of x and y.

__eq__

__eq__(
    other
)

Compares two tensors element-wise for equality.
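
As an illustrative sketch, assuming element-wise tensor equality is enabled (the default behavior in TensorFlow 2.x):

x = tf.constant([1, 2, 3])
y = tf.constant([1, 0, 3])
x == y  # => [True, False, True]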

__floordiv__

__floordiv__(
    x, y
)

Divides x / y elementwise, rounding toward the most negative integer.

The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

x and y must have the same type, and the result will have the same type as well.
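
A minimal sketch of the flooring behavior (example values assumed):

x = tf.constant([7, -7])
y = tf.constant([2, 2])
x // y  # => [3, -4]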

Args:

Returns:

x / y rounded down.

Raises:

__ge__

__ge__(
    x, y, name=None
)

Returns the truth value of (x >= y) element-wise.

NOTE: math.greater_equal supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]

Args:

Returns:

A Tensor of type bool.

__getitem__

__getitem__(
    tensor, slice_spec, var=None
)

Overload for Tensor.__getitem__.

This operation extracts the specified region from the tensor. The notation is similar to NumPy, with the restriction that currently only basic indexing is supported. That means that using a non-scalar tensor as input is not currently allowed.

Some useful examples:

# Strip leading and trailing 2 elements
foo = tf.constant([1,2,3,4,5,6])
print(foo[2:-2].eval())  # => [3,4]

# Skip every other row and reverse the order of the columns
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[::2,::-1].eval())  # => [[3,2,1], [9,8,7]]

# Use scalar tensors as indices on both dimensions
print(foo[tf.constant(0), tf.constant(2)].eval())  # => 3

# Insert another dimension
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[:, tf.newaxis, :].eval()) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]]
print(foo[:, :, tf.newaxis].eval()) # => [[[1],[2],[3]], [[4],[5],[6]], [[7],[8],[9]]]

# Ellipses (3 equivalent operations)
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].eval())  # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis, ...].eval())  # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis].eval())  # => [[[1,2,3], [4,5,6], [7,8,9]]]

# Masks
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[foo > 2].eval())  # => [3, 4, 5, 6, 7, 8, 9]

Notes:

Args:

Returns:

The appropriate slice of "tensor", based on "slice_spec".

Raises:

__gt__

__gt__(
    x, y, name=None
)

Returns the truth value of (x > y) element-wise.

NOTE: math.greater supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]

Args:

Returns:

A Tensor of type bool.

__invert__

__invert__(
    x, name=None
)

Returns the truth value of NOT x element-wise.
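
For example (a small sketch with assumed boolean inputs):

x = tf.constant([True, False])
~x  # => [False, True]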

Args:

Returns:

A Tensor of type bool.

__iter__

__iter__()

__le__

__le__(
    x, y, name=None
)

Returns the truth value of (x <= y) element-wise.

NOTE: math.less_equal supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]

Args:

Returns:

A Tensor of type bool.

__len__

__len__()

__lt__

__lt__(
    x, y, name=None
)

Returns the truth value of (x < y) element-wise.

NOTE: math.less supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]

Args:

Returns:

A Tensor of type bool.

__matmul__

__matmul__(
    x, y
)

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

A simple 2-D tensor matrix multiplication:

a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a  # 2-D tensor
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b  # 2-D tensor
c = tf.matmul(a, b)
c  # `a` * `b`

A batch matrix multiplication with batch shape [2]

a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a  # 3-D tensor
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b  # 3-D tensor
c = tf.matmul(a, b)
c  # `a` * `b`

Since Python 3.5, the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:

d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])

Args:

Returns:

A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.

Raises:

__mod__

__mod__(
    x, y
)

Returns the element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.

NOTE: math.floormod supports broadcasting. More about broadcasting here
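
A short sketch of the flooring-mod semantics described above (example values assumed):

x = tf.constant([7, -7])
y = tf.constant([3, 3])
x % y  # => [1, 2], since floor(-7 / 3) * 3 + 2 == -7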

Args:

Returns:

A Tensor. Has the same type as x.

__mul__

__mul__(
    x, y
)

Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
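
For example, a minimal dense-by-dense sketch (values assumed for illustration):

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([4.0, 5.0, 6.0])
x * y  # => [4.0, 10.0, 18.0]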

__ne__

__ne__(
    other
)

Compares two tensors element-wise for inequality.

__neg__

__neg__(
    x, name=None
)

Computes numerical negative value element-wise.

I.e., \(y = -x\).
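
For example (values assumed for this sketch):

x = tf.constant([1, -2, 3])
-x  # => [-1, 2, -3]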

Args:

Returns:

A Tensor. Has the same type as x.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)

__nonzero__

__nonzero__()

Dummy method to prevent a tensor from being used as a Python bool.

This is the Python 2.x counterpart to __bool__() above.

Raises:

TypeError.

__or__

__or__(
    x, y
)

Returns the truth value of x OR y element-wise.

NOTE: math.logical_or supports broadcasting. More about broadcasting here
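
A brief sketch (boolean values assumed):

x = tf.constant([True, False, False])
y = tf.constant([False, False, True])
x | y  # => [True, False, True]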

Args:

Returns:

A Tensor of type bool.

__pow__

__pow__(
    x, y
)

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]

Args:

Returns:

A Tensor.

__radd__

__radd__(
    y, x
)

Dispatches to add for strings and add_v2 for all other types.

__rand__

__rand__(
    y, x
)

Returns the truth value of x AND y element-wise.

NOTE: math.logical_and supports broadcasting. More about broadcasting here

Args:

Returns:

A Tensor of type bool.

__rdiv__

__rdiv__(
    y, x
)

Divide two values using Python 2 semantics.

Used for Tensor.__div__.

Args:

Returns:

x / y returns the quotient of x and y.

__rfloordiv__

__rfloordiv__(
    y, x
)

Divides x / y elementwise, rounding toward the most negative integer.

The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

x and y must have the same type, and the result will have the same type as well.

Args:

Returns:

x / y rounded down.

Raises:

__rmatmul__

__rmatmul__(
    y, x
)

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

A simple 2-D tensor matrix multiplication:

a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a  # 2-D tensor
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b  # 2-D tensor
c = tf.matmul(a, b)
c  # `a` * `b`

A batch matrix multiplication with batch shape [2]

a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a  # 3-D tensor
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b  # 3-D tensor
c = tf.matmul(a, b)
c  # `a` * `b`

Since Python 3.5, the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:

d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])

Args:

Returns:

A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.

Raises:

__rmod__

__rmod__(
    y, x
)

Returns the element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.

NOTE: math.floormod supports broadcasting. More about broadcasting here

Args:

Returns:

A Tensor. Has the same type as x.

__rmul__

__rmul__(
    y, x
)

Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".

__ror__

__ror__(
    y, x
)

Returns the truth value of x OR y element-wise.

NOTE: math.logical_or supports broadcasting. More about broadcasting here

Args:

Returns:

A Tensor of type bool.

__rpow__

__rpow__(
    y, x
)

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]

Args:

Returns:

A Tensor.

__rsub__

__rsub__(
    y, x
)

Returns x - y element-wise.

NOTE: Subtract supports broadcasting. More about broadcasting here

Args:

Returns:

A Tensor. Has the same type as x.

__rtruediv__

__rtruediv__(
    y, x
)

__rxor__

__rxor__(
    y, x
)

Logical XOR function.

x ^ y = (x | y) & ~(x & y)

Inputs are tensors, and if a tensor contains more than one element, an element-wise logical XOR is computed.

Usage:

x = tf.constant([False, False, True, True], dtype = tf.bool)
y = tf.constant([False, True, False, True], dtype = tf.bool)
z = tf.logical_xor(x, y, name="LogicalXor")
#  here z = [False  True  True False]

Args:

Returns:

A Tensor of type bool with the same size as that of x or y.

__sub__

__sub__(
    x, y
)

Returns x - y element-wise.

NOTE: Subtract supports broadcasting. More about broadcasting here
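
For example (values assumed for this sketch):

x = tf.constant([10, 20, 30])
y = tf.constant([1, 2, 3])
x - y  # => [9, 18, 27]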

Args:

Returns:

A Tensor. Has the same type as x.

__truediv__

__truediv__(
    x, y
)

__xor__

__xor__(
    x, y
)

Logical XOR function.

x ^ y = (x | y) & ~(x & y)

Inputs are tensors, and if a tensor contains more than one element, an element-wise logical XOR is computed.

Usage:

x = tf.constant([False, False, True, True], dtype = tf.bool)
y = tf.constant([False, True, False, True], dtype = tf.bool)
z = tf.logical_xor(x, y, name="LogicalXor")
#  here z = [False  True  True False]

Args:

Returns:

A Tensor of type bool with the same size as that of x or y.

consumers

consumers()

Returns a list of Operations that consume this tensor.

Returns:

A list of Operations.
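
A minimal sketch, assuming graph-mode construction (consumers are tracked for graph tensors):

with tf.Graph().as_default():
  c = tf.constant(1.0)
  d = tf.identity(c)
  c.consumers()  # => a list containing the `Identity` operation, which takes `c` as input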

eval

eval(
    feed_dict=None, session=None
)

Evaluates this tensor in a Session.

Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor.

N.B. Before invoking Tensor.eval(), its graph must have been launched in a session, and either a default session must be available, or session must be specified explicitly.
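
A minimal sketch, assuming graph execution is in effect (e.g. after calling tf.compat.v1.disable_eager_execution()):

tf.compat.v1.disable_eager_execution()
c = tf.constant([1.0, 2.0])
d = c * 2.0
# Entering the `Session` context makes it the default session for `eval()`.
with tf.compat.v1.Session() as sess:
  d.eval()  # => [2. 4.]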

Args:

Returns:

A numpy array corresponding to the value of this tensor.

experimental_ref

experimental_ref()

Returns a hashable reference object to this Tensor.

Warning: Experimental API that could be changed or removed.

The primary use case for this API is to put tensors in a set or dictionary. We can't put tensors in a set/dictionary directly, as tensor.__hash__() is no longer available starting in TensorFlow 2.0.

import tensorflow as tf

x = tf.constant(5)
y = tf.constant(10)
z = tf.constant(10)

# The following will raise an exception starting in TensorFlow 2.0
# TypeError: Tensor is unhashable if Tensor equality is enabled.
tensor_set = {x, y, z}
tensor_dict = {x: 'five', y: 'ten', z: 'ten'}

Instead, we can use tensor.experimental_ref().

tensor_set = {x.experimental_ref(),
              y.experimental_ref(),
              z.experimental_ref()}

print(x.experimental_ref() in tensor_set)
==> True

tensor_dict = {x.experimental_ref(): 'five',
               y.experimental_ref(): 'ten',
               z.experimental_ref(): 'ten'}

print(tensor_dict[y.experimental_ref()])
==> ten

Also, the reference object provides a .deref() function that returns the original Tensor.

x = tf.constant(5)
print(x.experimental_ref().deref())
==> tf.Tensor(5, shape=(), dtype=int32)

get_shape

get_shape()

Alias of Tensor.shape.

set_shape

set_shape(
    shape
)

Updates the shape of this tensor.

This method can be called multiple times, and will merge the given shape with the current shape of this tensor. It can be used to provide additional information about the shape of this tensor that cannot be inferred from the graph alone. For example, this can be used to provide additional information about the shapes of images:

_, image_data = tf.compat.v1.TFRecordReader(...).read(...)
image = tf.image.decode_png(image_data, channels=3)

# The height and width dimensions of `image` are data dependent, and
# cannot be computed without executing the op.
print(image.shape)
==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])

# We know that each image in this dataset is 28 x 28 pixels.
image.set_shape([28, 28, 3])
print(image.shape)
==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])

NOTE: This shape is not enforced at runtime. Setting incorrect shapes can result in inconsistencies between the statically-known graph and the runtime value of tensors. For runtime validation of the shape, use tf.ensure_shape instead.

Args:

Raises:

Class Variables