torch.Tensor¶
A torch.Tensor is a multi-dimensional matrix containing elements of
a single data type.
Torch defines eight CPU tensor types and eight GPU tensor types:
| Data type | dtype | CPU tensor | GPU tensor |
|---|---|---|---|
| 32-bit floating point | torch.float32 or torch.float | torch.FloatTensor | torch.cuda.FloatTensor |
| 64-bit floating point | torch.float64 or torch.double | torch.DoubleTensor | torch.cuda.DoubleTensor |
| 16-bit floating point | torch.float16 or torch.half | torch.HalfTensor | torch.cuda.HalfTensor |
| 8-bit integer (unsigned) | torch.uint8 | torch.ByteTensor | torch.cuda.ByteTensor |
| 8-bit integer (signed) | torch.int8 | torch.CharTensor | torch.cuda.CharTensor |
| 16-bit integer (signed) | torch.int16 or torch.short | torch.ShortTensor | torch.cuda.ShortTensor |
| 32-bit integer (signed) | torch.int32 or torch.int | torch.IntTensor | torch.cuda.IntTensor |
| 64-bit integer (signed) | torch.int64 or torch.long | torch.LongTensor | torch.cuda.LongTensor |
torch.Tensor is an alias for the default tensor type (torch.FloatTensor).
A tensor can be constructed from a Python list or sequence using the
torch.tensor() constructor:
>>> torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1.0000, -1.0000],
[ 1.0000, -1.0000]])
>>> torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
Warning
torch.tensor() always copies data. If you have a Tensor
data and just want to change its requires_grad flag, use
requires_grad_() or
detach() to avoid a copy.
If you have a numpy array and want to avoid a copy, use
torch.as_tensor().
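For instance, a minimal sketch of the difference (assuming numpy is imported as np):
>>> a = np.array([1, 2, 3])
>>> t = torch.as_tensor(a)   # shares memory with a, no copy
>>> t[0] = -1
>>> a                        # the change is visible through the ndarray
array([-1,  2,  3])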
A tensor of specific data type can be constructed by passing a
torch.dtype and/or a torch.device to a
constructor or tensor creation op:
>>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[ 0, 0, 0, 0],
[ 0, 0, 0, 0]], dtype=torch.int32)
>>> cuda0 = torch.device('cuda:0')
>>> torch.ones([2, 4], dtype=torch.float64, device=cuda0)
tensor([[ 1.0000, 1.0000, 1.0000, 1.0000],
[ 1.0000, 1.0000, 1.0000, 1.0000]], dtype=torch.float64, device='cuda:0')
The contents of a tensor can be accessed and modified using Python’s indexing and slicing notation:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
tensor(6)
>>> x[0][1] = 8
>>> print(x)
tensor([[ 1, 8, 3],
[ 4, 5, 6]])
Use torch.Tensor.item() to get a Python number from a tensor containing a
single value:
>>> x = torch.tensor([[1]])
>>> x
tensor([[ 1]])
>>> x.item()
1
>>> x = torch.tensor(2.5)
>>> x
tensor(2.5000)
>>> x.item()
2.5
A tensor can be created with requires_grad=True so that
torch.autograd records operations on it for automatic differentiation.
>>> x = torch.tensor([[1., -1.], [1., 1.]], requires_grad=True)
>>> out = x.pow(2).sum()
>>> out.backward()
>>> x.grad
tensor([[ 2.0000, -2.0000],
[ 2.0000, 2.0000]])
Each tensor has an associated torch.Storage, which holds its data.
The tensor class provides a multi-dimensional, strided
view of a storage and defines numeric operations on it.
Note
For more information on the torch.dtype, torch.device, and
torch.layout attributes of a torch.Tensor, see
Tensor Attributes.
Note
Methods which mutate a tensor are marked with an underscore suffix.
For example, torch.FloatTensor.abs_() computes the absolute value
in-place and returns the modified tensor, while torch.FloatTensor.abs()
computes the result in a new tensor.
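A small illustration of the convention (output formatting may vary slightly by version):
>>> t = torch.tensor([-1., 2.])
>>> t.abs()    # out-of-place: returns a new tensor, t is unchanged
tensor([ 1.,  2.])
>>> t.abs_()   # in-place: modifies and returns t itself
tensor([ 1.,  2.])
>>> t
tensor([ 1.,  2.])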
Note
To change an existing tensor’s torch.device and/or torch.dtype, consider using
to() method on the tensor.
-
class torch.Tensor¶
There are a few main ways to create a tensor, depending on your use case.
- To create a tensor with pre-existing data, use torch.tensor().
- To create a tensor with specific size, use torch.* tensor creation ops (see Creation Ops).
- To create a tensor with the same size (and similar types) as another tensor, use torch.*_like tensor creation ops (see Creation Ops).
- To create a tensor with similar type but different size as another tensor, use tensor.new_* creation ops.
-
new_tensor(data, dtype=None, device=None, requires_grad=False) → Tensor¶
Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Warning
new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a numpy array and want to avoid a copy, use torch.from_numpy().
Warning
When data is a tensor x, new_tensor() reads out "the data" from whatever it is passed, and constructs a leaf variable. Therefore tensor.new_tensor(x) is equivalent to x.clone().detach() and tensor.new_tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.
Parameters:
- data (array_like) – The returned Tensor copies data.
- dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
- device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
- requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.ones((2,), dtype=torch.int8)
>>> data = [[0, 1], [2, 3]]
>>> tensor.new_tensor(data)
tensor([[ 0, 1],
        [ 2, 3]], dtype=torch.int8)
-
new_full(size, fill_value, dtype=None, device=None, requires_grad=False) → Tensor¶
Returns a Tensor of size size filled with fill_value. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters:
- fill_value (scalar) – the number to fill the output tensor with.
- dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
- device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
- requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.ones((2,), dtype=torch.float64)
>>> tensor.new_full((3, 4), 3.141592)
tensor([[ 3.1416, 3.1416, 3.1416, 3.1416],
        [ 3.1416, 3.1416, 3.1416, 3.1416],
        [ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)
-
new_empty(size, dtype=None, device=None, requires_grad=False) → Tensor¶
Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters:
- dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
- device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
- requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.ones(())
>>> tensor.new_empty((2, 3))
tensor([[ 5.8182e-18,  4.5765e-41, -1.0545e+30],
        [ 3.0949e-41,  4.4842e-44,  0.0000e+00]])
-
new_ones(size, dtype=None, device=None, requires_grad=False) → Tensor¶
Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters:
- size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
- dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
- device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
- requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.tensor((), dtype=torch.int32)
>>> tensor.new_ones((2, 3))
tensor([[ 1, 1, 1],
        [ 1, 1, 1]], dtype=torch.int32)
-
new_zeros(size, dtype=None, device=None, requires_grad=False) → Tensor¶
Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters:
- size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
- dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
- device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
- requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.tensor((), dtype=torch.float64)
>>> tensor.new_zeros((2, 3))
tensor([[ 0., 0., 0.],
        [ 0., 0., 0.]], dtype=torch.float64)
-
is_cuda¶
Is True if the Tensor is stored on the GPU, False otherwise.
-
device¶
Is the torch.device where this Tensor is.
-
abs() → Tensor¶ See
torch.abs()
-
acos() → Tensor¶ See
torch.acos()
-
add(value) → Tensor¶
add(value=1, other) → Tensor
See torch.add()
-
addbmm(beta=1, mat, alpha=1, batch1, batch2) → Tensor¶ See
torch.addbmm()
-
addcdiv(value=1, tensor1, tensor2) → Tensor¶ See
torch.addcdiv()
-
addcmul(value=1, tensor1, tensor2) → Tensor¶ See
torch.addcmul()
-
addmm(beta=1, mat, alpha=1, mat1, mat2) → Tensor¶ See
torch.addmm()
-
addmv(beta=1, tensor, alpha=1, mat, vec) → Tensor¶ See
torch.addmv()
-
addr(beta=1, alpha=1, vec1, vec2) → Tensor¶ See
torch.addr()
-
allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor¶ See
torch.allclose()
-
apply_(callable) → Tensor¶
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.
Note
This function only works with CPU tensors and should not be used in code sections that require high performance.
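A minimal illustration (apply_ modifies the tensor in place and returns it; output formatting may vary by version):
>>> t = torch.tensor([1., 2., 3.])
>>> t.apply_(lambda v: v * 2)
tensor([ 2., 4., 6.])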
-
argmax(dim=None, keepdim=False)[source]¶ See
torch.argmax()
-
argmin(dim=None, keepdim=False)[source]¶ See
torch.argmin()
-
asin() → Tensor¶ See
torch.asin()
-
atan() → Tensor¶ See
torch.atan()
-
atan2(other) → Tensor¶ See
torch.atan2()
-
baddbmm(beta=1, alpha=1, batch1, batch2) → Tensor¶ See
torch.baddbmm()
-
bernoulli(*, generator=None) → Tensor¶
Returns a result tensor where each \(\texttt{result[i]}\) is independently sampled from \(\text{Bernoulli}(\texttt{self[i]})\). self must have floating point dtype, and the result will have the same dtype.
-
bernoulli_()¶
bernoulli_(p=0.5, *, generator=None) → Tensor
Fills each location of self with an independent sample from \(\text{Bernoulli}(\texttt{p})\). self can have integral dtype.
bernoulli_(p_tensor, *, generator=None) → Tensor
p_tensor should be a tensor containing probabilities to be used for drawing the binary random number. The \(\text{i}^{th}\) element of self tensor will be set to a value sampled from \(\text{Bernoulli}(\texttt{p\_tensor[i]})\). self can have integral dtype, but p_tensor must have floating point dtype.
See also bernoulli() and torch.bernoulli()
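A short sketch of both forms (the samples are random, so output will vary):
>>> p = torch.empty(3).uniform_(0, 1)  # per-element probabilities
>>> p.bernoulli()                      # new tensor of 0./1. samples
>>> t = torch.zeros(3)
>>> t.bernoulli_(0.5)                  # fills t in place with 0./1. samples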
-
bmm(batch2) → Tensor¶ See
torch.bmm()
-
btrifact(info=None, pivot=True)[source]¶ See
torch.btrifact()
-
btrifact_with_info(pivot=True) -> (Tensor, Tensor, Tensor)¶
-
btrisolve(LU_data, LU_pivots) → Tensor¶
-
cauchy_(median=0, sigma=1, *, generator=None) → Tensor¶ Fills the tensor with numbers drawn from the Cauchy distribution:
\[f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2}\]
-
ceil() → Tensor¶ See
torch.ceil()
-
cholesky(upper=False) → Tensor¶ See
torch.cholesky()
-
chunk(chunks, dim=0) → List of Tensors¶ See
torch.chunk()
-
clamp(min, max) → Tensor¶ See
torch.clamp()
-
clone() → Tensor¶
Returns a copy of the self tensor. The copy has the same size and data type as self.
Note
Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor.
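A small sketch of the gradient behaviour noted above:
>>> a = torch.tensor([1., 2.], requires_grad=True)
>>> b = a.clone()
>>> b.sum().backward()
>>> a.grad   # gradients flow through the clone back to a
tensor([ 1., 1.])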
-
contiguous() → Tensor¶
Returns a contiguous tensor containing the same data as the self tensor. If the self tensor is contiguous, this function returns the self tensor.
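For example (t() returns a non-contiguous view):
>>> x = torch.randn(3, 4)
>>> y = x.t()
>>> y.is_contiguous()
False
>>> y.contiguous().is_contiguous()
True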
-
copy_(src, non_blocking=False) → Tensor¶
Copies the elements from src into the self tensor and returns self.
The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.
Parameters:
- src (Tensor) – the source tensor to copy from
- non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
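A brief illustration (note the cross-dtype copy):
>>> x = torch.zeros(2, 2)
>>> x.copy_(torch.tensor([[1, 2], [3, 4]]))  # int source copied into float destination
tensor([[ 1., 2.],
        [ 3., 4.]])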
-
cos() → Tensor¶ See
torch.cos()
-
cosh() → Tensor¶ See
torch.cosh()
-
cpu() → Tensor¶ Returns a copy of this object in CPU memory.
If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned.
-
cross(other, dim=-1) → Tensor¶ See
torch.cross()
-
cuda(device=None, non_blocking=False) → Tensor¶ Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
Parameters:
- device (torch.device) – The destination GPU device. Defaults to the current CUDA device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False.
-
cumprod(dim, dtype=None) → Tensor¶ See
torch.cumprod()
-
cumsum(dim, dtype=None) → Tensor¶ See
torch.cumsum()
-
data_ptr() → int¶
Returns the address of the first element of the self tensor.
-
det() → Tensor¶ See
torch.det()
-
diag(diagonal=0) → Tensor¶ See
torch.diag()
-
diag_embed(offset=0, dim1=-2, dim2=-1) → Tensor¶
-
dim() → int¶
Returns the number of dimensions of the self tensor.
-
dist(other, p=2) → Tensor¶ See
torch.dist()
-
div(value) → Tensor¶ See
torch.div()
-
dot(tensor2) → Tensor¶ See
torch.dot()
-
eig(eigenvectors=False) -> (Tensor, Tensor)¶ See
torch.eig()
-
element_size() → int¶ Returns the size in bytes of an individual element.
Example:
>>> torch.tensor([]).element_size()
4
>>> torch.tensor([], dtype=torch.uint8).element_size()
1
-
eq(other) → Tensor¶ See
torch.eq()
-
equal(other) → bool¶ See
torch.equal()
-
erf() → Tensor¶ See
torch.erf()
-
erfc() → Tensor¶ See
torch.erfc()
-
erfinv() → Tensor¶ See
torch.erfinv()
-
exp() → Tensor¶ See
torch.exp()
-
expm1() → Tensor¶ See
torch.expm1()
-
expand(*sizes) → Tensor¶
Returns a new view of the self tensor with singleton dimensions expanded to a larger size.
Passing -1 as the size for a dimension means not changing the size of that dimension.
Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.
Parameters: *sizes (torch.Size or int...) – the desired expanded size
Example:
>>> x = torch.tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[ 1, 1, 1, 1],
        [ 2, 2, 2, 2],
        [ 3, 3, 3, 3]])
>>> x.expand(-1, 4)   # -1 means not changing the size of that dimension
tensor([[ 1, 1, 1, 1],
        [ 2, 2, 2, 2],
        [ 3, 3, 3, 3]])
-
expand_as(other) → Tensor¶
Expand this tensor to the same size as other. self.expand_as(other) is equivalent to self.expand(other.size()).
Please see expand() for more information about expand.
Parameters: other (torch.Tensor) – The result tensor has the same size as other.
-
exponential_(lambd=1, *, generator=None) → Tensor¶
Fills self tensor with elements drawn from the exponential distribution:
\[f(x) = \lambda e^{-\lambda x}\]
-
fill_(value) → Tensor¶
Fills self tensor with the specified value.
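For example, a one-line sketch:
>>> torch.empty(3).fill_(5)
tensor([ 5., 5., 5.])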
-
flatten(input, start_dim=0, end_dim=-1) → Tensor¶
See torch.flatten()
-
flip(dims) → Tensor¶ See
torch.flip()
-
floor() → Tensor¶ See
torch.floor()
-
fmod(divisor) → Tensor¶ See
torch.fmod()
-
frac() → Tensor¶ See
torch.frac()
-
gather(dim, index) → Tensor¶ See
torch.gather()
-
ge(other) → Tensor¶ See
torch.ge()
-
gels(A) → Tensor¶ See
torch.gels()
-
geometric_(p, *, generator=None) → Tensor¶
Fills self tensor with elements drawn from the geometric distribution:
\[f(X=k) = (1 - p)^{k - 1} p\]
-
geqrf() -> (Tensor, Tensor)¶ See
torch.geqrf()
-
ger(vec2) → Tensor¶ See
torch.ger()
-
gesv(A) → Tensor, Tensor¶ See
torch.gesv()
-
get_device() -> Device ordinal (Integer)¶ For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown.
Example:
>>> x = torch.randn(3, 4, 5, device='cuda:0')
>>> x.get_device()
0
>>> x.cpu().get_device()  # RuntimeError: get_device is not implemented for type torch.FloatTensor
-
gt(other) → Tensor¶ See
torch.gt()
-
histc(bins=100, min=0, max=0) → Tensor¶ See
torch.histc()
-
index_add_(dim, index, tensor) → Tensor¶
Accumulates the elements of tensor into the self tensor by adding to the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the ith row of tensor is added to the jth row of self.
The dimth dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
Note
When using the CUDA backend, this operation may induce nondeterministic behaviour that is not easily switched off. Please see the notes on Reproducibility for background.
Parameters:
- dim (int) – dimension along which to index
- index (LongTensor) – indices of tensor to select from
- tensor (Tensor) – the tensor containing values to add
Example:
>>> x = torch.ones(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_add_(0, index, t)
tensor([[  2.,   3.,   4.],
        [  1.,   1.,   1.],
        [  8.,   9.,  10.],
        [  1.,   1.,   1.],
        [  5.,   6.,   7.]])
-
index_copy_(dim, index, tensor) → Tensor¶
Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the ith row of tensor is copied to the jth row of self.
The dimth dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
Parameters:
- dim (int) – dimension along which to index
- index (LongTensor) – indices of tensor to select from
- tensor (Tensor) – the tensor containing values to copy
Example:
>>> x = torch.zeros(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_copy_(0, index, t)
tensor([[ 1., 2., 3.],
        [ 0., 0., 0.],
        [ 7., 8., 9.],
        [ 0., 0., 0.],
        [ 4., 5., 6.]])
-
index_fill_(dim, index, val) → Tensor¶
Fills the elements of the self tensor with value val by selecting the indices in the order given in index.
Parameters:
- dim (int) – dimension along which to index
- index (LongTensor) – indices of self tensor to fill in
- val (float) – the value to fill with
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 2])
>>> x.index_fill_(1, index, -1)
tensor([[-1., 2., -1.],
        [-1., 5., -1.],
        [-1., 8., -1.]])
-
index_put_(indices, value, accumulate=False) → Tensor¶
Puts values from the tensor value into the tensor self using the indices specified in indices (which is a tuple of Tensors). The expression tensor.index_put_(indices, value) is equivalent to tensor[indices] = value. Returns self.
If accumulate is True, the elements in value are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements.
Parameters:
- indices (tuple of LongTensor) – tensors used to index into self
- value (Tensor) – tensor of same dtype as self
- accumulate (bool) – whether to accumulate into self
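A minimal sketch (indices is a tuple of index tensors, one per dimension):
>>> x = torch.zeros(2, 3)
>>> x.index_put_((torch.tensor([0, 1]), torch.tensor([2, 0])), torch.tensor([1., 2.]))
tensor([[ 0., 0., 1.],
        [ 2., 0., 0.]])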
-
index_select(dim, index) → Tensor¶ See torch.index_select()
-
inverse() → Tensor¶ See
torch.inverse()
-
is_contiguous() → bool¶
Returns True if the self tensor is contiguous in memory in C order.
-
is_set_to(tensor) → bool¶
Returns True if this object refers to the same THTensor object from the Torch C API as the given tensor.
-
is_signed()¶
-
item() → number¶
Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist().
This operation is not differentiable.
Example:
>>> x = torch.tensor([1.0])
>>> x.item()
1.0
-
kthvalue(k, dim=None, keepdim=False) -> (Tensor, LongTensor)¶ See
torch.kthvalue()
-
le(other) → Tensor¶ See
torch.le()
-
lerp(start, end, weight) → Tensor¶ See
torch.lerp()
-
log() → Tensor¶ See
torch.log()
-
logdet() → Tensor¶ See
torch.logdet()
-
log10() → Tensor¶ See
torch.log10()
-
log1p() → Tensor¶ See
torch.log1p()
-
log2() → Tensor¶ See
torch.log2()
-
log_normal_(mean=1, std=2, *, generator=None)¶
Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean \(\mu\) and standard deviation \(\sigma\). Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:
\[f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}\]
-
logsumexp(dim, keepdim=False) → Tensor¶
-
lt(other) → Tensor¶ See
torch.lt()
-
map_(tensor, callable)¶
Applies callable for each element in the self tensor and the given tensor and stores the results in the self tensor. The self tensor and the given tensor must be broadcastable.
The callable should have the signature:
def callable(a, b) -> number
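A short CPU-only sketch:
>>> a = torch.tensor([1., 2., 3.])
>>> b = torch.tensor([10., 20., 30.])
>>> a.map_(b, lambda x, y: x + y)   # stores x + y elementwise into a
tensor([ 11., 22., 33.])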
-
masked_scatter_(mask, source)¶
Copies elements from source into the self tensor at positions where the mask is one. The shape of mask must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask.
Parameters:
- mask (ByteTensor) – the binary mask
- source (Tensor) – the tensor to copy from
Note
The mask operates on the self tensor, not on the given source tensor.
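A small sketch (source values are consumed in order at the masked positions):
>>> x = torch.zeros(5)
>>> mask = torch.tensor([0, 1, 0, 1, 1], dtype=torch.uint8)
>>> x.masked_scatter_(mask, torch.tensor([10., 20., 30.]))
tensor([  0., 10.,   0., 20., 30.])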
-
masked_fill_(mask, value)¶
Fills elements of the self tensor with value where mask is one. The shape of mask must be broadcastable with the shape of the underlying tensor.
Parameters:
- mask (ByteTensor) – the binary mask
- value (float) – the value to fill in with
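For example:
>>> x = torch.tensor([1., 2., 3.])
>>> x.masked_fill_(torch.tensor([1, 0, 1], dtype=torch.uint8), -1.)
tensor([-1., 2., -1.])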
-
masked_select(mask) → Tensor¶ See torch.masked_select()
-
matmul(tensor2) → Tensor¶ See
torch.matmul()
-
matrix_power(n) → Tensor¶
-
max(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)¶ See
torch.max()
-
mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)¶ See
torch.mean()
-
median(dim=None, keepdim=False) -> (Tensor, LongTensor)¶ See
torch.median()
-
min(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)¶ See
torch.min()
-
mm(mat2) → Tensor¶ See
torch.mm()
-
mode(dim=None, keepdim=False) -> (Tensor, LongTensor)¶ See
torch.mode()
-
mul(value) → Tensor¶ See
torch.mul()
-
multinomial(num_samples, replacement=False, *, generator=None) → Tensor¶
-
mv(vec) → Tensor¶ See
torch.mv()
-
mvlgamma(p) → Tensor¶ See
torch.mvlgamma()
-
mvlgamma_(p) → Tensor¶ In-place version of
mvlgamma()
-
narrow(dimension, start, length) → Tensor¶
See torch.narrow()
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> x.narrow(0, 0, 2)
tensor([[ 1, 2, 3],
        [ 4, 5, 6]])
>>> x.narrow(1, 1, 2)
tensor([[ 2, 3],
        [ 5, 6],
        [ 8, 9]])
-
ne(other) → Tensor¶ See
torch.ne()
-
neg() → Tensor¶ See
torch.neg()
-
nonzero() → LongTensor¶ See
torch.nonzero()
-
normal_(mean=0, std=1, *, generator=None) → Tensor¶
Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
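A one-line sketch (samples are random, so the output varies):
>>> torch.empty(3).normal_(mean=0, std=1)   # e.g. tensor([ 0.2817, -1.0401,  0.1397])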
-
numel() → int¶ See
torch.numel()
-
numpy() → numpy.ndarray¶
Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to the self tensor will be reflected in the ndarray and vice versa.
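A short sketch of the shared storage:
>>> t = torch.ones(3)
>>> a = t.numpy()
>>> t.add_(1)      # in-place change to the tensor...
>>> a              # ...is visible through the ndarray
array([2., 2., 2.], dtype=float32)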
-
orgqr(input2) → Tensor¶ See
torch.orgqr()
-
ormqr(input2, input3, left=True, transpose=False) → Tensor¶ See
torch.ormqr()
-
permute(*dims) → Tensor¶
Permutes the dimensions of this tensor.
Parameters: *dims (int...) – The desired ordering of dimensions
Example:
>>> x = torch.randn(2, 3, 5)
>>> x.size()
torch.Size([2, 3, 5])
>>> x.permute(2, 0, 1).size()
torch.Size([5, 2, 3])
-
pin_memory()¶
-
pinverse() → Tensor¶ See
torch.pinverse()
-
potrf(upper=True)[source]¶ See
torch.cholesky()
-
potri(upper=True) → Tensor¶ See
torch.potri()
-
potrs(input2, upper=True) → Tensor¶ See
torch.potrs()
-
pow(exponent) → Tensor¶ See
torch.pow()
-
prod(dim=None, keepdim=False, dtype=None) → Tensor¶ See
torch.prod()
-
pstrf(upper=True, tol=-1) -> (Tensor, IntTensor)¶ See
torch.pstrf()
-
put_(indices, tensor, accumulate=False) → Tensor¶
Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor.
If accumulate is True, the elements in tensor are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements.
Parameters:
- indices (LongTensor) – the indices into self
- tensor (Tensor) – the tensor containing values to copy from
- accumulate (bool) – whether to accumulate into self
Example:
>>> src = torch.tensor([[4, 3, 5], [6, 7, 8]])
>>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10]))
tensor([[  4,   9,   5],
        [ 10,   7,   8]])
-
qr() -> (Tensor, Tensor)¶ See
torch.qr()
-
random_(from=0, to=None, *, generator=None) → Tensor¶
Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by self tensor's data type. However, for floating point types, if unspecified, range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch.tensor(1, dtype=torch.double).random_() will be uniform in [0, 2^53].
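For example (samples are random, so the output varies):
>>> torch.zeros(5, dtype=torch.int64).random_(0, 10)   # values drawn from [0, 9]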
-
reciprocal() → Tensor¶ See torch.reciprocal()
-
reciprocal_() → Tensor¶ In-place version of
reciprocal()
-
remainder(divisor) → Tensor¶ See torch.remainder()
-
remainder_(divisor) → Tensor¶ In-place version of
remainder()
-
renorm(p, dim, maxnorm) → Tensor¶ See
torch.renorm()
-
repeat(*sizes) → Tensor¶
Repeats this tensor along the specified dimensions.
Unlike expand(), this function copies the tensor's data.
Warning
torch.Tensor.repeat() behaves differently from numpy.repeat, but is more similar to numpy.tile.
Parameters: sizes (torch.Size or int...) – The number of times to repeat this tensor along each dimension
Example:
>>> x = torch.tensor([1, 2, 3])
>>> x.repeat(4, 2)
tensor([[ 1, 2, 3, 1, 2, 3],
        [ 1, 2, 3, 1, 2, 3],
        [ 1, 2, 3, 1, 2, 3],
        [ 1, 2, 3, 1, 2, 3]])
>>> x.repeat(4, 2, 1).size()
torch.Size([4, 2, 3])
-
requires_grad_(requires_grad=True) → Tensor¶
Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor.
requires_grad_()'s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor.
Parameters: requires_grad (bool) – If autograd should record operations on this tensor. Default: True.
Example:
>>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights)  # some function
>>> weights
tensor([-0.5503,  0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007,  0.9853, -4.2316, -1.6606])
-
reshape(*shape) → Tensor¶
Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view.
See torch.reshape()
Parameters: shape (tuple of python:ints or int...) – the desired shape
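A brief sketch:
>>> x = torch.arange(6)
>>> x.reshape(2, 3)   # returns a view here, since x is contiguous
tensor([[0, 1, 2],
        [3, 4, 5]])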
-
reshape_as(other) → Tensor¶
Returns this tensor as the same shape as other. self.reshape_as(other) is equivalent to self.reshape(other.sizes()). This method returns a view if other.sizes() is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view.
Please see reshape() for more information about reshape.
Parameters: other (torch.Tensor) – The result tensor has the same shape as other.
-
resize_(*sizes) → Tensor¶
Resizes the self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.
Warning
This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_().
Parameters: sizes (torch.Size or int...) – the desired size
Example:
>>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])
>>> x.resize_(2, 2)
tensor([[ 1, 2],
        [ 3, 4]])
-
resize_as_(tensor) → Tensor¶
Resizes the self tensor to be the same size as the specified tensor. This is equivalent to self.resize_(tensor.size()).
-
round() → Tensor¶ See
torch.round()
-
rsqrt() → Tensor¶ See
torch.rsqrt()
-
scatter_(dim, index, src) → Tensor¶
Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.
For a 3-D tensor, self is updated as:
self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2
This is the reverse operation of the manner described in gather().
self, index and src (if it is a Tensor) should have same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim.
Moreover, as for gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive, and all values in a row along the specified dimension dim must be unique.
Parameters:
- dim (int) – the axis along which to index
- index (LongTensor) – the indices of elements to scatter
- src (Tensor or float) – the source element(s) to scatter
Example:
>>> x = torch.rand(2, 5)
>>> x
tensor([[ 0.3992, 0.2908, 0.9044, 0.4850, 0.6004],
        [ 0.5735, 0.9006, 0.6797, 0.4152, 0.1732]])
>>> torch.zeros(3, 5).scatter_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
tensor([[ 0.3992, 0.9006, 0.6797, 0.4850, 0.6004],
        [ 0.0000, 0.2908, 0.0000, 0.4152, 0.0000],
        [ 0.5735, 0.0000, 0.9044, 0.0000, 0.1732]])
>>> z = torch.zeros(2, 4).scatter_(1, torch.tensor([[2], [3]]), 1.23)
>>> z
tensor([[ 0.0000, 0.0000, 1.2300, 0.0000],
        [ 0.0000, 0.0000, 0.0000, 1.2300]])
-
scatter_add_(dim, index, other) → Tensor¶
Adds all values from the tensor other into self at the indices specified in the index tensor in a similar fashion as scatter_(). For each value in other, it is added to an index in self which is specified by its index in other for dimension != dim and by the corresponding value in index for dimension = dim.
For a 3-D tensor, self is updated as:
self[index[i][j][k]][j][k] += other[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] += other[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] += other[i][j][k]  # if dim == 2
self, index and other should have same number of dimensions. It is also required that index.size(d) <= other.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim.
Moreover, as for gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive, and all values in a row along the specified dimension dim must be unique.
Note
When using the CUDA backend, this operation may induce nondeterministic behaviour that is not easily switched off. Please see the notes on Reproducibility for background.
Parameters:
- dim (int) – the axis along which to index
- index (LongTensor) – the indices of elements to scatter and add
- other (Tensor) – the source elements to scatter and add
Example:
>>> x = torch.rand(2, 5)
>>> x
tensor([[0.7404, 0.0427, 0.6480, 0.3806, 0.8328],
        [0.7953, 0.2009, 0.9154, 0.6782, 0.9620]])
>>> torch.ones(3, 5).scatter_add_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
tensor([[1.7404, 1.2009, 1.9154, 1.3806, 1.8328],
        [1.0000, 1.0427, 1.0000, 1.6782, 1.0000],
        [1.7953, 1.0000, 1.6480, 1.0000, 1.9620]])
-
select(dim, index) → Tensor¶
Slices the self tensor along the selected dimension at the given index. This function returns a tensor with the given dimension removed.
Parameters:
- dim (int) – the dimension to slice
- index (int) – the index to select with
Note
select() is equivalent to slicing. For example, tensor.select(0, index) is equivalent to tensor[index] and tensor.select(2, index) is equivalent to tensor[:,:,index].
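For example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> x.select(0, 1)   # same as x[1]; the selected dimension is removed
tensor([4, 5, 6])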
-
set_(source=None, storage_offset=0, size=None, stride=None) → Tensor¶
Sets the underlying storage, size, and strides. If source is a tensor, the self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other.
If source is a Storage, the method sets the underlying storage, offset, size, and stride.
Parameters:
- source (Tensor or Storage) – the tensor or storage to use
- storage_offset (int, optional) – the offset in the storage
- size (torch.Size, optional) – the desired size. Defaults to the size of the source.
- stride (tuple, optional) – the desired stride. Defaults to C-contiguous strides.
-
share_memory_()¶
Moves the underlying storage to shared memory.
This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.
-
sigmoid() → Tensor¶ See
torch.sigmoid()
-
sign() → Tensor¶ See
torch.sign()
-
sin() → Tensor¶ See
torch.sin()
-
sinh() → Tensor¶ See
torch.sinh()
-
size() → torch.Size¶
Returns the size of the self tensor. The returned value is a subclass of tuple.
Example:
>>> torch.empty(3, 4, 5).size()
torch.Size([3, 4, 5])
-
slogdet() -> (Tensor, Tensor)¶ See
torch.slogdet()
-
sort(dim=None, descending=False) -> (Tensor, LongTensor)¶ See
torch.sort()
-
split(split_size, dim=0)[source]¶ See
torch.split()
-
sparse_mask(input, mask) → Tensor¶
Returns a new SparseTensor with values from Tensor input filtered by the indices of mask; the values of mask itself are ignored. input and mask must have the same shape.
Parameters:
- input (Tensor) – an input Tensor
- mask (SparseTensor) – a SparseTensor whose indices we use to filter input
Example:
>>> nnz = 5
>>> dims = [5, 5, 2, 2]
>>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),
                   torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)
>>> V = torch.randn(nnz, dims[2], dims[3])
>>> size = torch.Size(dims)
>>> S = torch.sparse_coo_tensor(I, V, size).coalesce()
>>> D = torch.randn(dims)
>>> D.sparse_mask(S)
tensor(indices=tensor([[0, 0, 0, 2],
                       [0, 1, 4, 3]]),
       values=tensor([[[ 1.6550,  0.2397],
                       [-0.1611, -0.0779]],
                      [[ 0.2326, -1.0558],
                       [ 1.4711,  1.9678]],
                      [[-0.5138, -0.0411],
                       [ 1.9417,  0.5158]],
                      [[ 0.0793,  0.0036],
                       [-0.2569, -0.1055]]]),
       size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)
-
sqrt() → Tensor¶ See
torch.sqrt()
-
squeeze(dim=None) → Tensor¶ See
torch.squeeze()
-
std(dim=None, unbiased=True, keepdim=False) → Tensor¶ See
torch.std()
-
storage() → torch.Storage¶ Returns the underlying storage.
-
storage_offset() → int¶
Returns the self tensor's offset in the underlying storage in terms of number of storage elements (not bytes).
Example:
>>> x = torch.tensor([1, 2, 3, 4, 5])
>>> x.storage_offset()
0
>>> x[3:].storage_offset()
3
-
storage_type()¶
-
stride(dim) → tuple or int¶
Returns the stride of the self tensor.
Stride is the jump necessary to go from one element to the next one in the specified dimension dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim.
Parameters: dim (int, optional) – the desired dimension in which stride is required
Example:
>>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>>> x.stride()
(5, 1)
>>> x.stride(0)
5
>>> x.stride(-1)
1
-
sub(value, other) → Tensor¶
Subtracts a scalar or tensor from the self tensor. If both value and other are specified, each element of other is scaled by value before being used.
When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.
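A short sketch of the scaled form (self - value * other):
>>> x = torch.tensor([4., 6.])
>>> x.sub(2., torch.tensor([1., 2.]))   # 4 - 2*1, 6 - 2*2
tensor([ 2., 2.])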
-
sum(dim=None, keepdim=False, dtype=None) → Tensor¶ See
torch.sum()
-
svd(some=True, compute_uv=True) -> (Tensor, Tensor, Tensor)¶ See
torch.svd()
-
symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)¶ See
torch.symeig()
-
to(*args, **kwargs) → Tensor¶
Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs).
Note
If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device.
Here are the ways to call to:
- to(dtype, non_blocking=False, copy=False) → Tensor
  Returns a Tensor with the specified dtype.
- to(device=None, dtype=None, non_blocking=False, copy=False) → Tensor
  Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
- to(other, non_blocking=False, copy=False) → Tensor
  Returns a Tensor with same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example:
>>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64)
>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], device='cuda:0')
>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
-
take(indices) → Tensor¶ See
torch.take()
-
tan() → Tensor¶ See torch.tan()
-
tanh() → Tensor¶ See
torch.tanh()
-
tolist() → list or number¶
Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with item(). Tensors are automatically moved to the CPU first if necessary.
This operation is not differentiable.
Examples:
>>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
 [-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803
-
topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)¶ See
torch.topk()
-
to_sparse(sparseDims) → Tensor¶
Returns a sparse copy of the tensor. PyTorch supports sparse tensors in coordinate format.
Parameters: sparseDims (int, optional) – the number of sparse dimensions to include in the new sparse tensor
Example:
>>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])
>>> d
tensor([[ 0,  0,  0],
        [ 9,  0, 10],
        [ 0,  0,  0]])
>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
                       [0, 2]]),
       values=tensor([ 9, 10]),
       size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
       values=tensor([[ 9,  0, 10]]),
       size=(3, 3), nnz=1, layout=torch.sparse_coo)
-
trace() → Tensor¶ See
torch.trace()
-
transpose(dim0, dim1) → Tensor¶ See torch.transpose()
-
transpose_(dim0, dim1) → Tensor¶ In-place version of
transpose()
-
tril(k=0) → Tensor¶ See
torch.tril()
-
triu(k=0) → Tensor¶ See
torch.triu()
-
trtrs(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)¶ See
torch.trtrs()
-
trunc() → Tensor¶ See
torch.trunc()
-
type(dtype=None, non_blocking=False, **kwargs) → str or Tensor¶ Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
Parameters:
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
-
type_as(tensor) → Tensor¶
Returns this tensor cast to the type of the given tensor.
This is a no-op if the tensor is already of the correct type. This is equivalent to:
self.type(tensor.type())
Parameters: tensor (Tensor) – the tensor which has the desired type
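For example:
>>> a = torch.tensor([1, 2])
>>> b = torch.tensor([1., 2.])
>>> a.type_as(b)   # casts a to b's type (torch.FloatTensor)
tensor([1., 2.])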
-
unfold(dim, size, step) → Tensor¶
Returns a tensor which contains all slices of size size from the self tensor in the dimension dim.
Step between two slices is given by step.
If sizedim is the size of dimension dim for self, the size of dimension dim in the returned tensor will be (sizedim - size) / step + 1.
An additional dimension of size size is appended in the returned tensor.
Parameters:
- dim (int) – dimension in which unfolding happens
- size (int) – the size of each slice that is unfolded
- step (int) – the step between each slice
Example:
>>> x = torch.arange(1., 8)
>>> x
tensor([ 1., 2., 3., 4., 5., 6., 7.])
>>> x.unfold(0, 2, 1)
tensor([[ 1., 2.],
        [ 2., 3.],
        [ 3., 4.],
        [ 4., 5.],
        [ 5., 6.],
        [ 6., 7.]])
>>> x.unfold(0, 2, 2)
tensor([[ 1., 2.],
        [ 3., 4.],
        [ 5., 6.]])
-
uniform_(from=0, to=1) → Tensor¶ Fills
selftensor with numbers sampled from the continuous uniform distribution:\[P(x) = \dfrac{1}{\text{to} - \text{from}} \]
-
unique(sorted=False, return_inverse=False, dim=None)[source]¶ Returns the unique scalar elements of the tensor as a 1-D tensor.
See
torch.unique()
-
unsqueeze(dim) → Tensor¶ See torch.unsqueeze()
-
unsqueeze_(dim) → Tensor¶ In-place version of
unsqueeze()
-
var(dim=None, unbiased=True, keepdim=False) → Tensor¶ See
torch.var()
-
view(*shape) → Tensor¶
Returns a new tensor with the same data as the self tensor but of a different shape.
The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions \(d, d+1, \dots, d+k\) that satisfy the following contiguity-like condition that \(\forall i = 0, \dots, k-1\),
\[\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]\]
Otherwise, contiguous() needs to be called before the tensor can be viewed. See also: reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise.
Parameters: shape (torch.Size or int...) – the desired size
Example:
>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
-
view_as(other) → Tensor¶
View this tensor as the same size as other. self.view_as(other) is equivalent to self.view(other.size()).
Please see view() for more information about view.
Parameters: other (torch.Tensor) – The result tensor has the same size as other.
-
zero_() → Tensor¶
Fills self tensor with zeros.
-
class torch.ByteTensor¶
The following methods are unique to torch.ByteTensor.
-
all()¶
all() → bool
Returns True if all elements in the tensor are non-zero, False otherwise.
Example:
>>> a = torch.randn(1, 3).byte() % 2
>>> a
tensor([[1, 0, 0]], dtype=torch.uint8)
>>> a.all()
tensor(0, dtype=torch.uint8)
-
all(dim, keepdim=False, out=None) → Tensor
Returns True if all elements in each row of the tensor in the given dimension dim are non-zero, False otherwise.
If keepdim is True, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 fewer dimension than input.
Parameters:
- dim (int) – the dimension to reduce
- keepdim (bool) – whether the output tensor has dim retained or not
- out (Tensor, optional) – the output tensor
Example:
>>> a = torch.randn(4, 2).byte() % 2
>>> a
tensor([[0, 0],
        [0, 0],
        [0, 1],
        [1, 1]], dtype=torch.uint8)
>>> a.all(dim=1)
tensor([0, 0, 0, 1], dtype=torch.uint8)
-
-
any()¶
any() → bool
Returns True if any elements in the tensor are non-zero, False otherwise.
Example:
>>> a = torch.randn(1, 3).byte() % 2
>>> a
tensor([[0, 0, 1]], dtype=torch.uint8)
>>> a.any()
tensor(1, dtype=torch.uint8)
-
any(dim, keepdim=False, out=None) → Tensor
Returns True if any elements in each row of the tensor in the given dimension dim are non-zero, False otherwise.
If keepdim is True, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 fewer dimension than input.
Parameters:
- dim (int) – the dimension to reduce
- keepdim (bool) – whether the output tensor has dim retained or not
- out (Tensor, optional) – the output tensor
Example:
>>> a = torch.randn(4, 2).byte() % 2
>>> a
tensor([[1, 0],
        [0, 0],
        [0, 1],
        [0, 0]], dtype=torch.uint8)
>>> a.any(dim=1)
tensor([1, 0, 1, 0], dtype=torch.uint8)
-
-