Represents a ragged tensor.
tf.RaggedTensor(
values, row_splits, cached_row_lengths=None, cached_value_rowids=None,
cached_nrows=None, internal=False, uniform_row_length=None
)
A RaggedTensor is a tensor with one or more ragged dimensions, which are
dimensions whose slices may have different lengths. For example, the inner
(column) dimension of rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []] is ragged,
since the column slices (rt[0, :], ..., rt[4, :]) have different lengths.
Dimensions whose slices all have the same length are called uniform
dimensions. The outermost dimension of a RaggedTensor is always uniform,
since it consists of a single slice (and so there is no possibility for
differing slice lengths).
The total number of dimensions in a RaggedTensor is called its rank,
and the number of ragged dimensions in a RaggedTensor is called its
ragged-rank. A RaggedTensor's ragged-rank is fixed at graph creation
time: it can't depend on the runtime values of Tensors, and can't vary
dynamically for different session runs.
Many ops support both Tensors and RaggedTensors. The term "potentially
ragged tensor" may be used to refer to a tensor that might be either a
Tensor or a RaggedTensor. The ragged-rank of a Tensor is zero.
When documenting the shape of a RaggedTensor, ragged dimensions can be
indicated by enclosing them in parentheses. For example, the shape of
a 3-D RaggedTensor that stores the fixed-size word embedding for each
word in a sentence, for each sentence in a batch, could be written as
[num_sentences, (num_words), embedding_size]. The parentheses around
(num_words) indicate that dimension is ragged, and that the length
of each element list in that dimension may vary for each item.
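For instance, here is a minimal sketch of a tensor with that shape (the sentence
and embedding sizes below are made up purely for illustration):
>>> embeddings = tf.RaggedTensor.from_row_lengths(
...     values=tf.zeros([4, 8]),   # 4 words total, each with a size-8 embedding
...     row_lengths=[3, 1])        # 2 sentences, with 3 and 1 words
>>> print(embeddings.shape)
(2, None, 8)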
Internally, a RaggedTensor consists of a concatenated list of values that
are partitioned into variable-length rows. In particular, each RaggedTensor
consists of:
A values tensor, which concatenates the variable-length rows into a
flattened list. For example, the values tensor for
[[3, 1, 4, 1], [], [5, 9, 2], [6], []] is [3, 1, 4, 1, 5, 9, 2, 6].
A row_splits vector, which indicates how those flattened values are
divided into rows. In particular, the values for row rt[i] are stored
in the slice rt.values[rt.row_splits[i]:rt.row_splits[i+1]].
>>> print(tf.RaggedTensor.from_row_splits(
... values=[3, 1, 4, 1, 5, 9, 2, 6],
... row_splits=[0, 4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
In addition to row_splits, ragged tensors provide support for four other
row-partitioning schemes:
row_lengths: a vector with shape [nrows], which specifies the length
of each row.
value_rowids and nrows: value_rowids is a vector with shape
[nvals], corresponding one-to-one with values, which specifies
each value's row index. In particular, the row rt[row] consists of the
values rt.values[j] where value_rowids[j]==row. nrows is an
integer scalar that specifies the number of rows in the
RaggedTensor. (nrows is used to indicate trailing empty rows.)
row_starts: a vector with shape [nrows], which specifies the start
offset of each row. Equivalent to row_splits[:-1].
row_limits: a vector with shape [nrows], which specifies the stop
offset of each row. Equivalent to row_splits[1:].
uniform_row_length: A scalar tensor, specifying the length of every
row. This row-partitioning scheme may only be used if all rows have
the same length.
Example: The following ragged tensors are equivalent, and all represent the
nested list [[3, 1, 4, 1], [], [5, 9, 2], [6], []].
>>> values = [3, 1, 4, 1, 5, 9, 2, 6]
>>> rt1 = RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8])
>>> rt2 = RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0])
>>> rt3 = RaggedTensor.from_value_rowids(
... values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5)
>>> rt4 = RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8])
>>> rt5 = RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8])
RaggedTensors with multiple ragged dimensions can be defined by using
a nested RaggedTensor for the values tensor. Each nested RaggedTensor
adds a single ragged dimension.
>>> inner_rt = RaggedTensor.from_row_splits( # =rt1 from above
... values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8])
>>> outer_rt = RaggedTensor.from_row_splits(
... values=inner_rt, row_splits=[0, 3, 3, 5])
>>> print(outer_rt.to_list())
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]
>>> print(outer_rt.ragged_rank)
2
The factory function RaggedTensor.from_nested_row_splits may be used to
construct a RaggedTensor with multiple ragged dimensions directly, by
providing a list of row_splits tensors:
>>> RaggedTensor.from_nested_row_splits(
... flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
... nested_row_splits=([0, 3, 3, 5], [0, 4, 4, 7, 8, 8])).to_list()
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]
RaggedTensors with uniform inner dimensions can be defined
by using a multidimensional Tensor for values.
>>> rt = RaggedTensor.from_row_splits(values=tf.ones([5, 3], tf.int32),
... row_splits=[0, 2, 5])
>>> print(rt.to_list())
[[[1, 1, 1], [1, 1, 1]],
[[1, 1, 1], [1, 1, 1], [1, 1, 1]]]
>>> print(rt.shape)
(2, None, 3)
RaggedTensors with uniform outer dimensions can be defined by using
one or more RaggedTensor with a uniform_row_length row-partitioning
tensor. For example, a RaggedTensor with shape [2, 2, None] can be
constructed with this method from a RaggedTensor values with shape
[4, None]:
>>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
>>> print(values.shape)
(4, None)
>>> rt6 = tf.RaggedTensor.from_uniform_row_length(values, 2)
>>> print(rt6)
<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]>
>>> print(rt6.shape)
(2, 2, None)
Note that rt6 only contains one ragged dimension (the innermost
dimension). In contrast, if from_row_splits is used to construct a similar
RaggedTensor, then that RaggedTensor will have two ragged dimensions:
>>> rt7 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])
>>> print(rt7.shape)
(2, None, None)
Uniform and ragged outer dimensions may be interleaved, meaning that a
tensor with any combination of ragged and uniform dimensions may be created.
For example, a RaggedTensor t4 with shape [3, None, 4, 8, None, 2] could
be constructed as follows:
t0 = tf.zeros([1000, 2]) # Shape: [1000, 2]
t1 = RaggedTensor.from_row_lengths(t0, [...]) # [160, None, 2]
t2 = RaggedTensor.from_uniform_row_length(t1, 8) # [20, 8, None, 2]
t3 = RaggedTensor.from_uniform_row_length(t2, 4) # [5, 4, 8, None, 2]
t4 = RaggedTensor.from_row_lengths(t3, [...]) # [3, None, 4, 8, None, 2]
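The same pattern can be sketched at a smaller scale; the row lengths below are
chosen only for illustration:
>>> t0 = tf.zeros([8, 2])                                    # Shape: [8, 2]
>>> t1 = tf.RaggedTensor.from_row_lengths(t0, [3, 1, 2, 2])  # [4, None, 2]
>>> t2 = tf.RaggedTensor.from_uniform_row_length(t1, 2)      # [2, 2, None, 2]
>>> t3 = tf.RaggedTensor.from_row_lengths(t2, [1, 0, 1])     # [3, None, 2, None, 2]
>>> print(t3.shape)
(3, None, 2, None, 2)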
Args:
  values: A potentially ragged tensor of any dtype and shape [nvals, ...].
  row_splits: A 1-D integer tensor with shape [nrows+1].
  cached_row_lengths: A 1-D integer tensor with shape [nrows].
  cached_value_rowids: A 1-D integer tensor with shape [nvals].
  cached_nrows: An integer scalar tensor.
  internal: True if the constructor is being called by one of the factory
    methods. If false, an exception will be raised.
  uniform_row_length: A scalar tensor.
Attributes:
  dtype: The DType of values in this tensor.
  flat_values: The innermost values tensor for this ragged tensor.
Concretely, if rt.values is a Tensor, then rt.flat_values is
rt.values; otherwise, rt.flat_values is rt.values.flat_values.
Conceptually, flat_values is the tensor formed by flattening the
outermost dimension and all of the ragged dimensions into a single
dimension.
rt.flat_values.shape = [nvals] + rt.shape[rt.ragged_rank + 1:]
(where nvals is the number of items in the flattened dimensions).
>>> rt = tf.ragged.constant([[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]])
>>> print(rt.flat_values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
nested_row_splits: A tuple containing the row_splits for all ragged dimensions.
rt.nested_row_splits is a tuple containing the row_splits tensors for
all ragged dimensions in rt, ordered from outermost to innermost. In
particular, rt.nested_row_splits = (rt.row_splits,) + value_splits where:
* value_splits = () if rt.values is a Tensor.
* value_splits = rt.values.nested_row_splits otherwise.
>>> rt = tf.ragged.constant(
... [[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]])
>>> for i, splits in enumerate(rt.nested_row_splits):
...   print('Splits for dimension %d: %s' % (i+1, splits.numpy()))
Splits for dimension 1: [0 3]
Splits for dimension 2: [0 3 3 5]
Splits for dimension 3: [0 4 4 7 8 8]
ragged_rank: The number of ragged dimensions in this ragged tensor.
row_splits: The row-split indices for this ragged tensor's values.
rt.row_splits specifies where the values for each row begin and end in
rt.values. In particular, the values for row rt[i] are stored in
the slice rt.values[rt.row_splits[i]:rt.row_splits[i+1]].
>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.row_splits) # indices of row splits in rt.values
tf.Tensor([0 4 4 7 8 8], shape=(6,), dtype=int64)
shape: The statically known shape of this ragged tensor.
>>> tf.ragged.constant([[0], [1, 2]]).shape
TensorShape([2, None])
>>> tf.ragged.constant([[[0, 1]], [[1, 2], [3, 4]]], ragged_rank=1).shape
TensorShape([2, None, 2])
values: The concatenated rows for this ragged tensor.
rt.values is a potentially ragged tensor formed by flattening the two
outermost dimensions of rt into a single dimension.
rt.values.shape = [nvals] + rt.shape[2:] (where nvals is the
number of items in the outer two dimensions of rt).
rt.values.ragged_rank = rt.ragged_rank - 1
>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
Raises:
  TypeError: If a row partitioning tensor has an inappropriate dtype.
  TypeError: If not exactly one row-partitioning argument is specified.
  ValueError: If a row partitioning tensor has an inappropriate shape.
  ValueError: If multiple partitioning arguments are specified.
  ValueError: If nrows is specified but value_rowids is not None.
__abs__
__abs__(
x, name=None
)
Computes the absolute value of a tensor.
Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input.
Given a tensor x of complex numbers, this operation returns a tensor of type
float32 or float64 that is the absolute value of each element in x. All
elements in x must be complex numbers of the form \(a + bj\). The
absolute value is computed as \(\sqrt{a^2 + b^2}\). For example:
x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])
tf.abs(x) # [5.25594902, 6.60492229]
Args:
  x: A Tensor or SparseTensor of type float16, float32, float64,
    int32, int64, complex64 or complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor or SparseTensor the same size, type, and sparsity as x with
  absolute values.
Note, for complex64 or complex128 input, the returned Tensor will be
of type float32 or float64, respectively.
If x is a SparseTensor, returns
SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)
__add__
__add__(
x, y, name=None
)
Returns x + y element-wise.
NOTE: math.add supports broadcasting. AddN does not. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: bfloat16, half, float32,
    float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
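For example, when both operands are ragged tensors with matching row lengths,
addition is applied elementwise (an illustrative sketch):
>>> x = tf.ragged.constant([[1, 2], [3]])
>>> y = tf.ragged.constant([[10, 20], [30]])
>>> print(x + y)
<tf.RaggedTensor [[11, 22], [33]]>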
__and__
__and__(
x, y, name=None
)
Returns the truth value of x AND y element-wise.
NOTE: math.logical_and supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor of type bool.
  y: A Tensor of type bool.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__bool__
__bool__(
_
)
Dummy method to prevent a RaggedTensor from being used as a Python bool.
__div__
__div__(
x, y, name=None
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide.
NOTE: Prefer using the Tensor division operator or tf.divide which obey Python 3 division operator semantics.
This function divides x and y, forcing Python 2 semantics. That is, if x
and y are both integers then the result will be an integer. This is in
contrast to Python 3, where division with / is always a float while division
with // is always an integer.
Args:
  x: Tensor numerator of real numeric type.
  y: Tensor denominator of real numeric type.
  name: A name for the operation (optional).
Returns:
  x / y returns the quotient of x and y.
__floordiv__
__floordiv__(
x, y, name=None
)
Divides x / y elementwise, rounding toward the most negative integer.
The same as tf.compat.v1.div(x,y) for integers, but uses
tf.floor(tf.compat.v1.div(x,y)) for
floating point arguments so that the result is always an integer (though
possibly an integer represented as floating point). This op is generated by
x // y floor division in Python 3 and in Python 2.7 with
from __future__ import division.
x and y must have the same type, and the result will have the same type
as well.
Args:
  x: Tensor numerator of real numeric type.
  y: Tensor denominator of real numeric type.
  name: A name for the operation (optional).
Returns:
  x / y rounded down.
Raises:
  TypeError: If the inputs are complex.
__ge__
__ge__(
x, y, name=None
)
Returns the truth value of (x >= y) element-wise.
NOTE: math.greater_equal supports broadcasting. More about broadcasting
here
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]
Args:
  x: A Tensor. Must be one of the following types: float32, float64, int32,
    uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__getitem__
__getitem__(
key
)
Returns the specified piece of this RaggedTensor.
Supports multidimensional indexing and slicing, with one restriction: indexing into a ragged inner dimension is not allowed. This case is problematic because the indicated value may exist in some rows but not others. In such cases, it's not obvious whether we should (1) report an IndexError; (2) use a default value; or (3) skip that value and return a tensor with fewer rows than we started with. Following the guiding principles of Python ("In the face of ambiguity, refuse the temptation to guess"), we simply disallow this operation.
Any dimensions added by array_ops.newaxis will be ragged if the following
dimension is ragged.
Args:
  self: The RaggedTensor to slice.
  key: Indicates which piece of the RaggedTensor to return, using standard
    Python semantics (e.g., negative values index from the end). key
    may have any of the following types:
    * int constant
    * Tensor
    * slice containing integer constants and/or scalar integer Tensors
    * Ellipsis
    * tf.newaxis
    * tuple containing any of the above (for multidimensional indexing)
Returns:
  A Tensor or RaggedTensor object. Values that include at least one
  ragged dimension are returned as RaggedTensor. Values that include no
  ragged dimensions are returned as Tensor. See below for examples of
  expressions that return Tensors vs RaggedTensors.
Raises:
  ValueError: If key is out of bounds.
  ValueError: If key is not supported.
  TypeError: If the indices in key have an unsupported type.
>>> # A 2-D ragged tensor with 1 ragged dimension.
>>> rt = tf.ragged.constant([['a', 'b', 'c'], ['d', 'e'], ['f'], ['g']])
>>> rt[0].numpy() # First row (1-D `Tensor`)
array([b'a', b'b', b'c'], dtype=object)
>>> rt[:3].to_list() # First three rows (2-D RaggedTensor)
[[b'a', b'b', b'c'], [b'd', b'e'], [b'f']]
>>> rt[3, 0].numpy() # 1st element of 4th row (scalar)
b'g'
>>> # A 3-D ragged tensor with 2 ragged dimensions.
>>> rt = tf.ragged.constant([[[1, 2, 3], [4]],
... [[5], [], [6]],
... [[7]],
... [[8, 9], [10]]])
>>> rt[1].to_list() # Second row (2-D RaggedTensor)
[[5], [], [6]]
>>> rt[3, 0].numpy() # First element of fourth row (1-D Tensor)
array([8, 9], dtype=int32)
>>> rt[:, 1:3].to_list() # Items 1-3 of each row (3-D RaggedTensor)
[[[4]], [[], [6]], [], [[10]]]
>>> rt[:, -1:].to_list() # Last item of each row (3-D RaggedTensor)
[[[4]], [[6]], [[7]], [[10]]]
__gt__
__gt__(
x, y, name=None
)
Returns the truth value of (x > y) element-wise.
NOTE: math.greater supports broadcasting. More about broadcasting
here
x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
Args:
  x: A Tensor. Must be one of the following types: float32, float64, int32,
    uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__invert__
__invert__(
x, name=None
)
Returns the truth value of NOT x element-wise.
Args:
  x: A Tensor of type bool.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__le__
__le__(
x, y, name=None
)
Returns the truth value of (x <= y) element-wise.
NOTE: math.less_equal supports broadcasting. More about broadcasting
here
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
Args:
  x: A Tensor. Must be one of the following types: float32, float64, int32,
    uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__lt__
__lt__(
x, y, name=None
)
Returns the truth value of (x < y) element-wise.
NOTE: math.less supports broadcasting. More about broadcasting
here
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
Args:
  x: A Tensor. Must be one of the following types: float32, float64, int32,
    uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__mod__
__mod__(
x, y, name=None
)
Returns element-wise remainder of division. When x < 0 xor y < 0 is
true, this follows Python semantics in that the result here is consistent
with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.
NOTE: math.floormod supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: int32, int64, bfloat16,
    half, float32, float64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
__mul__
__mul__(
x, y, name=None
)
Returns x * y element-wise.
NOTE: tf.multiply supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: bfloat16, half, float32,
    float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
__neg__
__neg__(
x, name=None
)
Computes numerical negative value element-wise.
I.e., \(y = -x\).
Args:
  x: A Tensor. Must be one of the following types: bfloat16, half, float32,
    float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
If x is a SparseTensor, returns
SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)
__nonzero__
__nonzero__(
_
)
Dummy method to prevent a RaggedTensor from being used as a Python bool.
__or__
__or__(
x, y, name=None
)
Returns the truth value of x OR y element-wise.
NOTE: math.logical_or supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor of type bool.
  y: A Tensor of type bool.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__pow__
__pow__(
x, y, name=None
)
Computes the power of one value to another.
Given a tensor x and a tensor y, this operation computes \(x^y\) for
corresponding elements in x and y. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
Args:
  x: A Tensor of type float16, float32, float64, int32, int64,
    complex64, or complex128.
  y: A Tensor of type float16, float32, float64, int32, int64,
    complex64, or complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor.
__radd__
__radd__(
x, y, name=None
)
Returns x + y element-wise.
NOTE: math.add supports broadcasting. AddN does not. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: bfloat16, half, float32,
    float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
__rand__
__rand__(
x, y, name=None
)
Returns the truth value of x AND y element-wise.
NOTE: math.logical_and supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor of type bool.
  y: A Tensor of type bool.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__rdiv__
__rdiv__(
x, y, name=None
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide.
NOTE: Prefer using the Tensor division operator or tf.divide which obey Python 3 division operator semantics.
This function divides x and y, forcing Python 2 semantics. That is, if x
and y are both integers then the result will be an integer. This is in
contrast to Python 3, where division with / is always a float while division
with // is always an integer.
Args:
  x: Tensor numerator of real numeric type.
  y: Tensor denominator of real numeric type.
  name: A name for the operation (optional).
Returns:
  x / y returns the quotient of x and y.
__rfloordiv__
__rfloordiv__(
x, y, name=None
)
Divides x / y elementwise, rounding toward the most negative integer.
The same as tf.compat.v1.div(x,y) for integers, but uses
tf.floor(tf.compat.v1.div(x,y)) for
floating point arguments so that the result is always an integer (though
possibly an integer represented as floating point). This op is generated by
x // y floor division in Python 3 and in Python 2.7 with
from __future__ import division.
x and y must have the same type, and the result will have the same type
as well.
Args:
  x: Tensor numerator of real numeric type.
  y: Tensor denominator of real numeric type.
  name: A name for the operation (optional).
Returns:
  x / y rounded down.
Raises:
  TypeError: If the inputs are complex.
__rmod__
__rmod__(
x, y, name=None
)
Returns element-wise remainder of division. When x < 0 xor y < 0 is
true, this follows Python semantics in that the result here is consistent
with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.
NOTE: math.floormod supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: int32, int64, bfloat16,
    half, float32, float64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
__rmul__
__rmul__(
x, y, name=None
)
Returns x * y element-wise.
NOTE: tf.multiply supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: bfloat16, half, float32,
    float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
__ror__
__ror__(
x, y, name=None
)
Returns the truth value of x OR y element-wise.
NOTE: math.logical_or supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor of type bool.
  y: A Tensor of type bool.
  name: A name for the operation (optional).
Returns:
  A Tensor of type bool.
__rpow__
__rpow__(
x, y, name=None
)
Computes the power of one value to another.
Given a tensor x and a tensor y, this operation computes \(x^y\) for
corresponding elements in x and y. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
Args:
  x: A Tensor of type float16, float32, float64, int32, int64,
    complex64, or complex128.
  y: A Tensor of type float16, float32, float64, int32, int64,
    complex64, or complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor.
__rsub__
__rsub__(
x, y, name=None
)
Returns x - y element-wise.
NOTE: Subtract supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: bfloat16, half, float32,
    float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
__rtruediv__
__rtruediv__(
x, y, name=None
)
Divides x / y elementwise (using Python 3 division operator semantics).
NOTE: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics.
This function forces Python 3 division operator semantics where all integer
arguments are cast to floating types first. This op is generated by normal
x / y division in Python 3 and in Python 2.7 with
from __future__ import division. If you want integer division that rounds
down, use x // y or tf.math.floordiv.
x and y must have the same numeric type. If the inputs are floating
point, the output will have the same type. If the inputs are integral, the
inputs are cast to float32 for int8 and int16 and float64 for int32
and int64 (matching the behavior of Numpy).
Args:
  x: Tensor numerator of numeric type.
  y: Tensor denominator of numeric type.
  name: A name for the operation (optional).
Returns:
  x / y evaluated in floating point.
Raises:
  TypeError: If x and y have different dtypes.
__rxor__
__rxor__(
x, y, name='LogicalXor'
)
Logical XOR function.
x ^ y = (x | y) & ~(x & y)
The inputs are tensors; if a tensor contains more than one element, an element-wise logical XOR is computed.
x = tf.constant([False, False, True, True], dtype = tf.bool)
y = tf.constant([False, True, False, True], dtype = tf.bool)
z = tf.logical_xor(x, y, name="LogicalXor")
# here z = [False True True False]
Args:
  x: A Tensor of type bool.
  y: A Tensor of type bool.
Returns:
  A Tensor of type bool with the same size as that of x or y.
__sub__
__sub__(
x, y, name=None
)
Returns x - y element-wise.
NOTE: Subtract supports broadcasting. More about broadcasting
here
Args:
  x: A Tensor. Must be one of the following types: bfloat16, half, float32,
    float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
__truediv__
__truediv__(
x, y, name=None
)
Divides x / y elementwise (using Python 3 division operator semantics).
NOTE: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics.
This function forces Python 3 division operator semantics where all integer
arguments are cast to floating types first. This op is generated by normal
x / y division in Python 3 and in Python 2.7 with
from __future__ import division. If you want integer division that rounds
down, use x // y or tf.math.floordiv.
x and y must have the same numeric type. If the inputs are floating
point, the output will have the same type. If the inputs are integral, the
inputs are cast to float32 for int8 and int16 and float64 for int32
and int64 (matching the behavior of Numpy).
Args:
  x: Tensor numerator of numeric type.
  y: Tensor denominator of numeric type.
  name: A name for the operation (optional).
Returns:
  x / y evaluated in floating point.
Raises:
  TypeError: If x and y have different dtypes.
__xor__
__xor__(
x, y, name='LogicalXor'
)
Logical XOR function.
x ^ y = (x | y) & ~(x & y)
The inputs are tensors; if a tensor contains more than one element, an element-wise logical XOR is computed.
x = tf.constant([False, False, True, True], dtype = tf.bool)
y = tf.constant([False, True, False, True], dtype = tf.bool)
z = tf.logical_xor(x, y, name="LogicalXor")
# here z = [False True True False]
Args:
  x: A Tensor of type bool.
  y: A Tensor of type bool.
Returns:
  A Tensor of type bool with the same size as that of x or y.
bounding_shape
bounding_shape(
axis=None, name=None, out_type=None
)
Returns the tight bounding box shape for this RaggedTensor.
Args:
  axis: An integer scalar or vector indicating which axes to return the
    bounding box for. If not specified, then the full bounding box is
    returned.
  name: A name prefix for the returned tensor (optional).
  out_type: dtype for the returned tensor. Defaults to self.row_splits.dtype.
Returns:
  An integer Tensor (dtype=self.row_splits.dtype). If axis is not
  specified, then output is a vector with
  output.shape=[self.shape.ndims]. If axis is a scalar, then the
  output is a scalar. If axis is a vector, then output is a vector,
  where output[i] is the bounding size for dimension axis[i].
>>> rt = tf.ragged.constant([[1, 2, 3, 4], [5], [], [6, 7, 8, 9], [10]])
>>> rt.bounding_shape().numpy()
array([5, 4])
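A sketch of the axis argument, using the same rt as above:
>>> rt.bounding_shape(axis=1).numpy()  # bound for the ragged (column) dimension
4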
consumers
consumers()
from_nested_row_lengths
@classmethod
from_nested_row_lengths(
flat_values, nested_row_lengths, name=None, validate=True
)
Creates a RaggedTensor from a nested list of row_lengths tensors.
result = flat_values
for row_lengths in reversed(nested_row_lengths):
  result = from_row_lengths(result, row_lengths)
Args:
  flat_values: A potentially ragged tensor.
  nested_row_lengths: A list of 1-D integer tensors. The ith tensor is
    used as the row_lengths for the ith ragged dimension.
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
Returns:
  A RaggedTensor (or flat_values if nested_row_lengths is empty).
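For example (a sketch that mirrors the from_nested_row_splits example given
earlier, expressed with row lengths instead of splits):
>>> tf.RaggedTensor.from_nested_row_lengths(
...     flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
...     nested_row_lengths=([3, 0, 2], [4, 0, 3, 1, 0])).to_list()
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]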
from_nested_row_splits
@classmethod
from_nested_row_splits(
flat_values, nested_row_splits, name=None, validate=True
)
Creates a RaggedTensor from a nested list of row_splits tensors.
result = flat_values
for row_splits in reversed(nested_row_splits):
  result = from_row_splits(result, row_splits)
Args:
  flat_values: A potentially ragged tensor.
  nested_row_splits: A list of 1-D integer tensors. The ith tensor is
    used as the row_splits for the ith ragged dimension.
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form a
    valid RaggedTensor.
Returns:
  A RaggedTensor (or flat_values if nested_row_splits is empty).
from_nested_value_rowids
@classmethod
from_nested_value_rowids(
flat_values, nested_value_rowids, nested_nrows=None, name=None, validate=True
)
Creates a RaggedTensor from a nested list of value_rowids tensors.
result = flat_values
for (rowids, nrows) in reversed(list(zip(nested_value_rowids, nested_nrows))):
  result = from_value_rowids(result, rowids, nrows)
Args:
  flat_values: A potentially ragged tensor.
  nested_value_rowids: A list of 1-D integer tensors. The ith tensor is
    used as the value_rowids for the ith ragged dimension.
  nested_nrows: A list of integer scalars. The ith scalar is used as the
    nrows for the ith ragged dimension.
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
Returns:
  A RaggedTensor (or flat_values if nested_value_rowids is empty).
Raises:
  ValueError: If len(nested_value_rowids) != len(nested_nrows).
from_row_lengths
@classmethod
from_row_lengths(
values, row_lengths, name=None, validate=True
)
Creates a RaggedTensor with rows partitioned by row_lengths.
The returned RaggedTensor corresponds with the python list defined by:
result = [[values.pop(0) for i in range(length)]
for length in row_lengths]
Args:
  values: A potentially ragged tensor with shape [nvals, ...].
  row_lengths: A 1-D integer tensor with shape [nrows]. Must be
    nonnegative. sum(row_lengths) must be nvals.
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
Returns:
  A RaggedTensor. result.rank = values.rank + 1.
result.ragged_rank = values.ragged_rank + 1.
>>> print(tf.RaggedTensor.from_row_lengths(
... values=[3, 1, 4, 1, 5, 9, 2, 6],
... row_lengths=[4, 0, 3, 1, 0]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_row_limits
@classmethod
from_row_limits(
values, row_limits, name=None, validate=True
)
Creates a RaggedTensor with rows partitioned by row_limits.
Equivalent to: from_row_splits(values, concat([0, row_limits])).
Args:
  values: A potentially ragged tensor with shape [nvals, ...].
  row_limits: A 1-D integer tensor with shape [nrows]. Must be sorted in
    ascending order. If nrows>0, then row_limits[-1] must be nvals.
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
Returns:
  A RaggedTensor. result.rank = values.rank + 1.
result.ragged_rank = values.ragged_rank + 1.
>>> print(tf.RaggedTensor.from_row_limits(
... values=[3, 1, 4, 1, 5, 9, 2, 6],
... row_limits=[4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_row_splits
@classmethod
from_row_splits(
values, row_splits, name=None, validate=True
)
Creates a RaggedTensor with rows partitioned by row_splits.
The returned RaggedTensor corresponds with the python list defined by:
result = [values[row_splits[i]:row_splits[i + 1]]
for i in range(len(row_splits) - 1)]
Args:
  values: A potentially ragged tensor with shape [nvals, ...].
  row_splits: A 1-D integer tensor with shape [nrows+1]. Must not be
    empty, and must be sorted in ascending order. row_splits[0] must be
    zero and row_splits[-1] must be nvals.
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
Returns:
  A RaggedTensor. result.rank = values.rank + 1.
  result.ragged_rank = values.ragged_rank + 1.
Raises:
  ValueError: If row_splits is an empty list.
>>> print(tf.RaggedTensor.from_row_splits(
... values=[3, 1, 4, 1, 5, 9, 2, 6],
... row_splits=[0, 4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_row_starts
@classmethod
from_row_starts(
values, row_starts, name=None, validate=True
)
Creates a RaggedTensor with rows partitioned by row_starts.
Equivalent to: from_row_splits(values, concat([row_starts, nvals])).
Args:
  values: A potentially ragged tensor with shape [nvals, ...].
  row_starts: A 1-D integer tensor with shape [nrows]. Must be
    nonnegative and sorted in ascending order. If nrows>0, then
    row_starts[0] must be zero.
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
Returns:
  A RaggedTensor. result.rank = values.rank + 1.
result.ragged_rank = values.ragged_rank + 1.
>>> print(tf.RaggedTensor.from_row_starts(
... values=[3, 1, 4, 1, 5, 9, 2, 6],
... row_starts=[0, 4, 4, 7, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_sparse
@classmethod
from_sparse(
st_input, name=None, row_splits_dtype=tf.dtypes.int64
)
Converts a 2D tf.SparseTensor to a RaggedTensor.
Each row of the output RaggedTensor will contain the explicit values
from the same row in st_input. st_input must be ragged-right. If
it is not ragged-right, then an error will be generated.
>>> st = tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 2], [1, 0], [3, 0]],
... values=[1, 2, 3, 4, 5],
... dense_shape=[4, 3])
>>> tf.RaggedTensor.from_sparse(st).to_list()
[[1, 2, 3], [4], [], [5]]
Currently, only two-dimensional SparseTensors are supported.
Args:
  st_input: The sparse tensor to convert. Must have rank 2.
  name: A name prefix for the returned tensors (optional).
  row_splits_dtype: dtype for the returned RaggedTensor's row_splits
    tensor. One of tf.int32 or tf.int64.
Returns:
  A RaggedTensor with the same values as st_input.
  output.ragged_rank = rank(st_input) - 1.
  output.shape = [st_input.dense_shape[0], None].
Raises:
  ValueError: If the number of dimensions in st_input is not known
    statically, or is not two.
from_tensor
@classmethod
from_tensor(
tensor, lengths=None, padding=None, ragged_rank=1, name=None,
row_splits_dtype=tf.dtypes.int64
)
Converts a tf.Tensor into a RaggedTensor.
The set of absent/default values may be specified using a vector of lengths
or a padding value (but not both). If lengths is specified, then the
output tensor will satisfy output[row] = tensor[row][:lengths[row]]. If
'lengths' is a list of lists or tuple of lists, those lists will be used
as nested row lengths. If padding is specified, then any row suffix
consisting entirely of padding will be excluded from the returned
RaggedTensor. If neither lengths nor padding is specified, then the
returned RaggedTensor will have no absent/default values.
>>> dt = tf.constant([[5, 7, 0], [0, 3, 0], [6, 0, 0]])
>>> tf.RaggedTensor.from_tensor(dt)
<tf.RaggedTensor [[5, 7, 0], [0, 3, 0], [6, 0, 0]]>
>>> tf.RaggedTensor.from_tensor(dt, lengths=[1, 0, 3])
<tf.RaggedTensor [[5], [], [6, 0, 0]]>
>>> tf.RaggedTensor.from_tensor(dt, padding=0)
<tf.RaggedTensor [[5, 7], [0, 3], [6]]>
>>> dt = tf.constant([[[5, 0], [7, 0], [0, 0]],
... [[0, 0], [3, 0], [0, 0]],
... [[6, 0], [0, 0], [0, 0]]])
>>> tf.RaggedTensor.from_tensor(dt, lengths=([2, 0, 3], [1, 1, 2, 0, 1]))
<tf.RaggedTensor [[[5], [7]], [], [[6, 0], [], [0]]]>
Args:
  tensor: The Tensor to convert. Must have rank ragged_rank + 1 or
    higher.
  lengths: An optional set of row lengths, specified using a 1-D integer
    Tensor whose length is equal to tensor.shape[0] (the number of rows
    in tensor). If specified, then output[row] will contain
    tensor[row][:lengths[row]]. Negative lengths are treated as zero. You
    may optionally pass a list or tuple of lengths to this argument, which
    will be used as nested row lengths to construct a ragged tensor with
    multiple ragged dimensions.
  padding: An optional padding value. If specified, then any row suffix
    consisting entirely of padding will be excluded from the returned
    RaggedTensor. padding is a Tensor with the same dtype as tensor
    and with shape=tensor.shape[ragged_rank + 1:].
  ragged_rank: Integer specifying the ragged rank for the returned
    RaggedTensor. Must be greater than zero.
  name: A name prefix for the returned tensors (optional).
  row_splits_dtype: dtype for the returned RaggedTensor's row_splits
    tensor. One of tf.int32 or tf.int64.
Returns:
  A RaggedTensor with the specified ragged_rank. The shape of the
  returned ragged tensor is compatible with the shape of tensor.
Raises:
  ValueError: If both lengths and padding are specified.
from_uniform_row_length
@classmethod
from_uniform_row_length(
values, uniform_row_length, nrows=None, validate=True, name=None
)
Creates a RaggedTensor with rows partitioned by uniform_row_length.
This method can be used to create RaggedTensors with multiple uniform
outer dimensions. For example, a RaggedTensor with shape [2, 2, None]
can be constructed with this method from a RaggedTensor values with shape
[4, None]:
>>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
>>> print(values.shape)
(4, None)
>>> rt1 = tf.RaggedTensor.from_uniform_row_length(values, 2)
>>> print(rt1)
<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]>
>>> print(rt1.shape)
(2, 2, None)
Note that rt1 only contains one ragged dimension (the innermost
dimension). In contrast, if from_row_splits is used to construct a similar
RaggedTensor, then that RaggedTensor will have two ragged dimensions:
>>> rt2 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])
>>> print(rt2.shape)
(2, None, None)
Args:
  values: A potentially ragged tensor with shape [nvals, ...].
  uniform_row_length: A scalar integer tensor. Must be nonnegative.
    The size of the outer axis of values must be evenly divisible by
    uniform_row_length.
  nrows: The number of rows in the constructed RaggedTensor. If not
    specified, then it defaults to nvals/uniform_row_length (or 0 if
    uniform_row_length==0). nrows only needs to be specified if
    uniform_row_length might be zero. uniform_row_length*nrows must
    be nvals.
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
  name: A name prefix for the RaggedTensor (optional).
Returns:
  A RaggedTensor that corresponds with the python list defined by:
result = [[values.pop(0) for i in range(uniform_row_length)]
for _ in range(nrows)]
result.rank = values.rank + 1.
result.ragged_rank = values.ragged_rank + 1.
from_value_rowids
@classmethod
from_value_rowids(
values, value_rowids, nrows=None, name=None, validate=True
)
Creates a RaggedTensor with rows partitioned by value_rowids.
The returned RaggedTensor corresponds with the python list defined by:
result = [[values[i] for i in range(len(values)) if value_rowids[i] == row]
for row in range(nrows)]
Args:
  values: A potentially ragged tensor with shape [nvals, ...].
  value_rowids: A 1-D integer tensor with shape [nvals], which corresponds
    one-to-one with values, and specifies each value's row index. Must be
    nonnegative, and must be sorted in ascending order.
  nrows: An integer scalar specifying the number of rows. This should be
    specified if the RaggedTensor may contain trailing empty rows. Must
    be greater than value_rowids[-1] (or zero if value_rowids is empty).
    Defaults to value_rowids[-1] + 1 (or zero if value_rowids is empty).
  name: A name prefix for the RaggedTensor (optional).
  validate: If true, then use assertions to check that the arguments form
    a valid RaggedTensor.
Returns:
  A RaggedTensor. result.rank = values.rank + 1.
  result.ragged_rank = values.ragged_rank + 1.
Raises:
  ValueError: If nrows is incompatible with value_rowids.
>>> print(tf.RaggedTensor.from_value_rowids(
... values=[3, 1, 4, 1, 5, 9, 2, 6],
... value_rowids=[0, 0, 0, 0, 2, 2, 2, 3],
... nrows=5))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
merge_dims
merge_dims(
outer_axis, inner_axis
)
Merges outer_axis...inner_axis into a single dimension.
Returns a copy of this RaggedTensor with the specified range of dimensions flattened into a single dimension, with elements in row-major order.
>>> rt = tf.ragged.constant([[[1, 2], [3]], [[4, 5, 6]]])
>>> print(rt.merge_dims(0, 1))
<tf.RaggedTensor [[1, 2], [3], [4, 5, 6]]>
>>> print(rt.merge_dims(1, 2))
<tf.RaggedTensor [[1, 2, 3], [4, 5, 6]]>
>>> print(rt.merge_dims(0, 2))
tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32)
To mimic the behavior of np.flatten (which flattens all dimensions), use
rt.merge_dims(0, -1). To mimic the behavior of tf.layers.Flatten (which
flattens all dimensions except the outermost batch dimension), use
rt.merge_dims(1, -1).
Args:
  outer_axis: int: The first dimension in the range of dimensions to
    merge. May be negative if self.shape.rank is statically known.
  inner_axis: int: The last dimension in the range of dimensions to
    merge. May be negative if self.shape.rank is statically known.
Returns:
  A copy of this tensor, with the specified dimensions merged into a
single dimension. The shape of the returned tensor will be
self.shape[:outer_axis] + [N] + self.shape[inner_axis + 1:], where N
is the total number of slices in the merged dimensions.
nested_row_lengths
nested_row_lengths(
name=None
)
Returns a tuple containing the row_lengths for all ragged dimensions.
rt.nested_row_lengths() is a tuple containing the row_lengths tensors
for all ragged dimensions in rt, ordered from outermost to innermost.
Args:
  name: A name prefix for the returned tensors (optional).
Returns:
  A tuple of 1-D integer Tensors. The length of the tuple is equal to
self.ragged_rank.
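For example (an illustrative sketch):
>>> rt = tf.ragged.constant([[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]])
>>> for i, lengths in enumerate(rt.nested_row_lengths()):
...   print('Lengths for dimension %d: %s' % (i+1, lengths.numpy()))
Lengths for dimension 1: [3 0 2]
Lengths for dimension 2: [4 0 3 1 0]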
nested_value_rowids
nested_value_rowids(
name=None
)
Returns a tuple containing the value_rowids for all ragged dimensions.
rt.nested_value_rowids is a tuple containing the value_rowids tensors
for all ragged dimensions in rt, ordered from outermost to innermost. In
particular, rt.nested_value_rowids = (rt.value_rowids(),) + value_ids
where:
* `value_ids = ()` if `rt.values` is a `Tensor`.
* `value_ids = rt.values.nested_value_rowids` otherwise.
Args:
  name: A name prefix for the returned tensors (optional).
Returns:
  A tuple of 1-D integer Tensors.
>>> rt = tf.ragged.constant(
... [[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]])
>>> for i, ids in enumerate(rt.nested_value_rowids()):
...   print('row ids for dimension %d: %s' % (i+1, ids.numpy()))
row ids for dimension 1: [0 0 0]
row ids for dimension 2: [0 0 0 2 2]
row ids for dimension 3: [0 0 0 0 2 2 2 3]
nrows
nrows(
out_type=None, name=None
)
Returns the number of rows in this ragged tensor.
I.e., the size of the outermost dimension of the tensor.
Args:
  out_type: dtype for the returned tensor. Defaults to
    self.row_splits.dtype.
  name: A name prefix for the returned tensor (optional).
Returns:
  A scalar Tensor with dtype out_type.
>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.nrows()) # rt has 5 rows.
tf.Tensor(5, shape=(), dtype=int64)
row_lengths
row_lengths(
axis=1, name=None
)
Returns the lengths of the rows in this ragged tensor.
rt.row_lengths()[i] indicates the number of values in the
ith row of rt.
Args:
  axis: An integer constant indicating the axis whose row lengths should be
    returned.
  name: A name prefix for the returned tensor (optional).
Returns:
  A potentially ragged integer Tensor with shape self.shape[:axis].
Raises:
  ValueError: If axis is out of bounds.
>>> rt = tf.ragged.constant(
... [[[3, 1, 4], [1]], [], [[5, 9], [2]], [[6]], []])
>>> print(rt.row_lengths()) # lengths of rows in rt
tf.Tensor([2 0 2 1 0], shape=(5,), dtype=int64)
>>> print(rt.row_lengths(axis=2)) # lengths of axis=2 rows.
<tf.RaggedTensor [[3, 1], [], [2, 1], [1], []]>
row_limits
row_limits(
name=None
)
Returns the limit indices for rows in this ragged tensor.
These indices specify where the values for each row end in
self.values. rt.row_limits() is equal to rt.row_splits[1:].
Args:
  name: A name prefix for the returned tensor (optional).
Returns:
  A 1-D integer Tensor with shape [nrows].
The returned tensor is nonnegative, and is sorted in ascending order.
>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
>>> print(rt.row_limits()) # indices of row limits in rt.values
tf.Tensor([4 4 7 8 8], shape=(5,), dtype=int64)
row_starts
row_starts(
name=None
)
Returns the start indices for rows in this ragged tensor.
These indices specify where the values for each row begin in
self.values. rt.row_starts() is equal to rt.row_splits[:-1].
Args:
  name: A name prefix for the returned tensor (optional).
Returns:
  A 1-D integer Tensor with shape [nrows].
The returned tensor is nonnegative, and is sorted in ascending order.
>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
>>> print(rt.row_starts()) # indices of row starts in rt.values
tf.Tensor([0 4 4 7 8], shape=(5,), dtype=int64)
to_list
to_list()
Returns a nested Python list with the values for this RaggedTensor.
Requires that rt was constructed in eager execution mode.
Returns:
  A nested Python list.
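For example (a minimal sketch):
>>> rt = tf.ragged.constant([[1, 2], [3], []])
>>> rt.to_list()
[[1, 2], [3], []]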
to_sparse
to_sparse(
name=None
)
Converts this RaggedTensor into a tf.SparseTensor.
>>> rt = tf.ragged.constant([[1, 2, 3], [4], [], [5, 6]])
>>> print(rt.to_sparse())
SparseTensor(indices=tf.Tensor(
[[0 0]
 [0 1]
 [0 2]
 [1 0]
 [3 0]
 [3 1]],
shape=(6, 2), dtype=int64),
values=tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32),
dense_shape=tf.Tensor([4 3], shape=(2,), dtype=int64))
Args:
  name: A name prefix for the returned tensors (optional).
Returns:
  A SparseTensor with the same values as self.
to_tensor
to_tensor(
default_value=None, name=None, shape=None
)
Converts this RaggedTensor into a tf.Tensor.
If shape is specified, then the result is padded and/or truncated to
the specified shape.
>>> rt = tf.ragged.constant([[9, 8, 7], [], [6, 5], [4]])
>>> print(rt.to_tensor())
tf.Tensor(
[[9 8 7]
 [0 0 0]
 [6 5 0]
 [4 0 0]], shape=(4, 3), dtype=int32)
>>> print(rt.to_tensor(shape=[5, 2]))
tf.Tensor(
[[9 8]
 [0 0]
 [6 5]
 [4 0]
 [0 0]], shape=(5, 2), dtype=int32)
Args:
  default_value: Value to set for indices not specified in self. Defaults
    to zero. default_value must be broadcastable to
    self.shape[self.ragged_rank + 1:].
  name: A name prefix for the returned tensors (optional).
  shape: The shape of the resulting dense tensor. In particular,
    result.shape[i] is shape[i] (if shape[i] is not None), or
    self.bounding_shape(i) (otherwise). shape.rank must be None or
    equal to self.rank.
Returns:
  A Tensor with shape ragged.bounding_shape(self) and the
  values specified by the non-empty values in self. Empty values are
  assigned default_value.
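A sketch of the default_value argument, using the same rt as above (the fill
value -1 is chosen only for illustration):
>>> print(rt.to_tensor(default_value=-1))
tf.Tensor(
[[ 9  8  7]
 [-1 -1 -1]
 [ 6  5 -1]
 [ 4 -1 -1]], shape=(4, 3), dtype=int32)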
value_rowids
value_rowids(
name=None
)
Returns the row indices for the values in this ragged tensor.
rt.value_rowids() corresponds one-to-one with the outermost dimension of
rt.values, and specifies the row containing each value. In particular,
the row rt[row] consists of the values rt.values[j] where
rt.value_rowids()[j] == row.
Args:
  name: A name prefix for the returned tensor (optional).
Returns:
  A 1-D integer Tensor with shape self.values.shape[:1].
The returned tensor is nonnegative, and is sorted in ascending order.
>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
>>> print(rt.value_rowids()) # corresponds 1:1 with rt.values
tf.Tensor([0 0 0 0 2 2 2 3], shape=(8,), dtype=int64)
with_flat_values
with_flat_values(
new_values
)
Returns a copy of self with flat_values replaced by new_values.
Preserves cached row-partitioning tensors such as self.cached_nrows and
self.cached_value_rowids if they have values.
Args:
  new_values: Potentially ragged tensor that should replace
    self.flat_values. Must have rank > 0, and must have the same
    number of rows as self.flat_values.
Returns:
  A RaggedTensor.
result.rank = self.ragged_rank + new_values.rank.
result.ragged_rank = self.ragged_rank + new_values.ragged_rank.
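For example (an illustrative sketch):
>>> rt = tf.ragged.constant([[1, 2], [3]])
>>> print(rt.with_flat_values(tf.constant([10, 20, 30])))
<tf.RaggedTensor [[10, 20], [30]]>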
with_row_splits_dtype
with_row_splits_dtype(
dtype
)
Returns a copy of this RaggedTensor with the given row_splits dtype.
For RaggedTensors with multiple ragged dimensions, the row_splits for all
nested RaggedTensor objects are cast to the given dtype.
Returns:
  A copy of this RaggedTensor, with the row_splits cast to the given
type.
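For example (an illustrative sketch):
>>> rt = tf.ragged.constant([[1, 2], [3]])
>>> print(rt.row_splits.dtype)
<dtype: 'int64'>
>>> print(rt.with_row_splits_dtype(tf.int32).row_splits.dtype)
<dtype: 'int32'>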
with_values
with_values(
new_values
)
Returns a copy of self with values replaced by new_values.
Preserves cached row-partitioning tensors such as self.cached_nrows and
self.cached_value_rowids if they have values.
Args:
  new_values: Potentially ragged tensor to use as the values for the
    returned RaggedTensor. Must have rank > 0, and must have the same
    number of rows as self.values.
Returns:
  A RaggedTensor. result.rank = 1 + new_values.rank.
  result.ragged_rank = 1 + new_values.ragged_rank.
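For example (an illustrative sketch in which the new values add a uniform
inner dimension):
>>> rt = tf.ragged.constant([[1, 2], [3]])
>>> print(rt.with_values(tf.constant([[1, 1], [2, 2], [3, 3]])))
<tf.RaggedTensor [[[1, 1], [2, 2]], [[3, 3]]]>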