3.1.23.6. unit_scaling.parameter.Tensor
- class unit_scaling.parameter.Tensor
- H
Returns a view of a matrix (2-D tensor) conjugated and transposed.
x.H is equivalent to x.transpose(0, 1).conj() for complex matrices and x.transpose(0, 1) for real matrices.
See also
mH: An attribute that also works on batches of matrices.
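For instance, a minimal illustrative sketch (not part of the original reference) of this equivalence on a complex matrix:
>>> import torch
>>> x = torch.randn(2, 3, dtype=torch.cfloat)
>>> torch.equal(x.H, x.transpose(0, 1).conj())  # conjugate transpose of a 2-D tensor
True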
- T
Returns a view of this tensor with its dimensions reversed.
If n is the number of dimensions in x, x.T is equivalent to x.permute(n-1, n-2, ..., 0).
Warning
The use of Tensor.T() on tensors of dimension other than 2 to reverse their shape is deprecated and will throw an error in a future release. Consider mT to transpose batches of matrices or x.permute(*torch.arange(x.ndim - 1, -1, -1)) to reverse the dimensions of a tensor.
- abs() Tensor
See
torch.abs()
- absolute_() Tensor
In-place version of absolute(). Alias for abs_().
- acos() Tensor
See
torch.acos()
- acosh() Tensor
See
torch.acosh()
- add(other, *, alpha=1) Tensor
Add a scalar or tensor to the self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used.
When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.
See
torch.add()
- addbmm(batch1, batch2, *, beta=1, alpha=1) Tensor
See
torch.addbmm()
- addcdiv(tensor1, tensor2, *, value=1) Tensor
See
torch.addcdiv()
- addcmul(tensor1, tensor2, *, value=1) Tensor
See
torch.addcmul()
- addmm(mat1, mat2, *, beta=1, alpha=1) Tensor
See
torch.addmm()
- addmv(mat, vec, *, beta=1, alpha=1) Tensor
See
torch.addmv()
- addr(vec1, vec2, *, beta=1, alpha=1) Tensor
See
torch.addr()
- align_as(other) Tensor
Permutes the dimensions of the self tensor to match the dimension order in the other tensor, adding size-one dims for any new names.
This operation is useful for explicit broadcasting by names (see examples).
All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor.
All dimension names of self must be present in other.names. other may contain named dimensions that are not in self.names; the output tensor has a size-one dimension for each of those new names.
To align a tensor to a specific order, use align_to().
Examples:
# Example 1: Applying a mask
>>> mask = torch.randint(2, [127, 128], dtype=torch.bool).refine_names('W', 'H')
>>> imgs = torch.randn(32, 128, 127, 3, names=('N', 'H', 'W', 'C'))
>>> imgs.masked_fill_(mask.align_as(imgs), 0)

# Example 2: Applying a per-channel-scale
>>> def scale_channels(input, scale):
>>>     scale = scale.refine_names('C')
>>>     return input * scale.align_as(input)

>>> num_channels = 3
>>> scale = torch.randn(num_channels, names=('C',))
>>> imgs = torch.rand(32, 128, 128, num_channels, names=('N', 'H', 'W', 'C'))
>>> more_imgs = torch.rand(32, num_channels, 128, 128, names=('N', 'C', 'H', 'W'))
>>> videos = torch.randn(3, num_channels, 128, 128, 128, names=('N', 'C', 'H', 'W', 'D'))

# scale_channels is agnostic to the dimension order of the input
>>> scale_channels(imgs, scale)
>>> scale_channels(more_imgs, scale)
>>> scale_channels(videos, scale)
Warning
The named tensor API is experimental and subject to change.
- align_to(*names)[source]
Permutes the dimensions of the self tensor to match the order specified in names, adding size-one dims for any new names.
All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor.
All dimension names of self must be present in names. names may contain additional names that are not in self.names; the output tensor has a size-one dimension for each of those new names.
names may contain up to one Ellipsis (...). The Ellipsis is expanded to be equal to all dimension names of self that are not mentioned in names, in the order that they appear in self.
Python 2 does not support Ellipsis but one may use a string literal instead ('...').
- Parameters:
names (iterable of str) – The desired dimension ordering of the output tensor. May contain up to one Ellipsis that is expanded to all unmentioned dim names of self.
Examples:
>>> tensor = torch.randn(2, 2, 2, 2, 2, 2)
>>> named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')
# Move the F and E dims to the front while keeping the rest in order
>>> named_tensor.align_to('F', 'E', ...)
Warning
The named tensor API is experimental and subject to change.
- all(dim=None, keepdim=False) Tensor
See
torch.all()
- allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) Tensor
See
torch.allclose()
- amax(dim=None, keepdim=False) Tensor
See
torch.amax()
- amin(dim=None, keepdim=False) Tensor
See
torch.amin()
- aminmax(*, dim=None, keepdim=False) -> (Tensor min, Tensor max)
See
torch.aminmax()
- angle() Tensor
See
torch.angle()
- any(dim=None, keepdim=False) Tensor
See
torch.any()
- apply_(callable) Tensor
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.
Note
This function only works with CPU tensors and should not be used in code sections that require high performance.
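A minimal illustrative sketch (CPU tensor only, and slow compared to vectorized ops):
>>> import torch
>>> t = torch.tensor([1., 2., 3.])
>>> t.apply_(lambda v: v * 2)  # applied element by element, in place
tensor([2., 4., 6.])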
- arccos() Tensor
See
torch.arccos()
- arccosh()
acosh() -> Tensor
See
torch.arccosh()
- arcsin() Tensor
See
torch.arcsin()
- arcsinh() Tensor
See
torch.arcsinh()
- arctan() Tensor
See
torch.arctan()
- arctan2(other) Tensor
See
torch.arctan2()
- arctanh() Tensor
See
torch.arctanh()
- argmax(dim=None, keepdim=False) LongTensor
See
torch.argmax()
- argmin(dim=None, keepdim=False) LongTensor
See
torch.argmin()
- argsort(dim=-1, descending=False) LongTensor
See
torch.argsort()
- argwhere() Tensor
See
torch.argwhere()
- as_strided_(size, stride, storage_offset=None) Tensor
In-place version of
as_strided()
- as_subclass(cls) Tensor
Makes a cls instance with the same data pointer as self. Changes in the output mirror changes in self, and the output stays attached to the autograd graph. cls must be a subclass of Tensor.
- asin() Tensor
See
torch.asin()
- asinh() Tensor
See
torch.asinh()
- atan() Tensor
See
torch.atan()
- atan2(other) Tensor
See
torch.atan2()
- atanh() Tensor
See
torch.atanh()
- backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)[source]
Computes the gradient of the current tensor w.r.t. graph leaves.
The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying a gradient. It should be a tensor of matching type and shape that represents the gradient of the differentiated function w.r.t. self.
This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it. See Default gradient layouts for details on the memory layout of accumulated gradients.
Note
If you run any forward ops, create gradient, and/or call backward in a user-specified CUDA stream context, see Stream semantics of backward passes.
Note
When inputs are provided and a given input is not a leaf, the current implementation will call its grad_fn (though it is not strictly needed to get these gradients). It is an implementation detail on which the user should not rely. See https://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780 for more details.
- Parameters:
gradient (Tensor, optional) – The gradient of the function being differentiated w.r.t. self. This argument can be omitted if self is a scalar.
retain_graph (bool, optional) – If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and can often be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph (bool, optional) – If True, the graph of the derivative will be constructed, allowing higher-order derivative products to be computed. Defaults to False.
inputs (sequence of Tensor, optional) – Inputs w.r.t. which the gradient will be accumulated into .grad. All other tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the tensors.
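As an illustrative sketch of the scalar case and of the non-scalar case that needs an explicit gradient argument:
>>> import torch
>>> x = torch.tensor([1., 2., 3.], requires_grad=True)
>>> (x ** 2).sum().backward()            # scalar output: no gradient argument needed
>>> x.grad
tensor([2., 4., 6.])
>>> x.grad = None                        # clear accumulated gradients before the next pass
>>> y = x ** 2                           # non-scalar output
>>> y.backward(gradient=torch.ones_like(y))
>>> x.grad
tensor([2., 4., 6.])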
- baddbmm(batch1, batch2, *, beta=1, alpha=1) Tensor
See
torch.baddbmm()
- bernoulli(*, generator=None) Tensor
Returns a result tensor where each \(\texttt{result[i]}\) is independently sampled from \(\text{Bernoulli}(\texttt{self[i]})\).
self must have floating point dtype, and the result will have the same dtype.
- bernoulli_(p=0.5, *, generator=None) Tensor
Fills each location of
self with an independent sample from \(\text{Bernoulli}(\texttt{p})\). self can have integral dtype.
p should either be a scalar or tensor containing probabilities to be used for drawing the binary random number.
If it is a tensor, the \(\text{i}^{th}\) element of self tensor will be set to a value sampled from \(\text{Bernoulli}(\texttt{p\_tensor[i]})\). In this case p must have floating point dtype.
See also
bernoulli() and torch.bernoulli()
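A small illustrative sketch using a per-element probability tensor (the middle draw is random):
>>> import torch
>>> p = torch.tensor([0.0, 0.5, 1.0])
>>> samples = torch.empty(3).bernoulli_(p)   # element i ~ Bernoulli(p[i])
>>> samples[0].item(), samples[2].item()     # the extreme probabilities are deterministic
(0.0, 1.0)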
- bfloat16(memory_format=torch.preserve_format) Tensor
self.bfloat16() is equivalent to self.to(torch.bfloat16). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- bincount(weights=None, minlength=0) Tensor
See
torch.bincount()
- bitwise_and_() Tensor
In-place version of
bitwise_and()
- bitwise_left_shift_(other) Tensor
In-place version of
bitwise_left_shift()
- bitwise_not_() Tensor
In-place version of
bitwise_not()
- bitwise_or_() Tensor
In-place version of
bitwise_or()
- bitwise_right_shift_(other) Tensor
In-place version of
bitwise_right_shift()
- bitwise_xor_() Tensor
In-place version of
bitwise_xor()
- bmm(batch2) Tensor
See
torch.bmm()
- bool(memory_format=torch.preserve_format) Tensor
self.bool() is equivalent to self.to(torch.bool). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- broadcast_to(shape) Tensor
See
torch.broadcast_to().
- byte(memory_format=torch.preserve_format) Tensor
self.byte() is equivalent to self.to(torch.uint8). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- cauchy_(median=0, sigma=1, *, generator=None) Tensor
Fills the tensor with numbers drawn from the Cauchy distribution:
\[f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2}\]
Note
Sigma (\(\sigma\)) is used to denote the scale parameter in Cauchy distribution.
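A minimal illustrative sketch (draws are random, so only the shape is shown):
>>> import torch
>>> draws = torch.empty(4).cauchy_(median=0.0, sigma=0.5)  # heavy-tailed samples, filled in place
>>> draws.shape
torch.Size([4])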
- cdouble(memory_format=torch.preserve_format) Tensor
self.cdouble() is equivalent to self.to(torch.complex128). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- ceil() Tensor
See
torch.ceil()
- cfloat(memory_format=torch.preserve_format) Tensor
self.cfloat() is equivalent to self.to(torch.complex64). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- chalf(memory_format=torch.preserve_format) Tensor
self.chalf() is equivalent to self.to(torch.complex32). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- char(memory_format=torch.preserve_format) Tensor
self.char() is equivalent to self.to(torch.int8). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- cholesky(upper=False) Tensor
See
torch.cholesky()
- chunk(chunks, dim=0) List of Tensors
See
torch.chunk()
- clamp(min=None, max=None) Tensor
See
torch.clamp()
- clone(*, memory_format=torch.preserve_format) Tensor
See
torch.clone()
- coalesce() Tensor
Returns a coalesced copy of self if self is an uncoalesced tensor.
Returns self if self is a coalesced tensor.
Warning
Throws an error if self is not a sparse COO tensor.
- col_indices() IntTensor
Returns the tensor containing the column indices of the
self tensor when self is a sparse CSR tensor of layout sparse_csr. The col_indices tensor is strictly of shape (self.nnz()) and of type int32 or int64. When using MKL routines such as sparse matrix multiplication, it is necessary to use int32 indexing in order to avoid downcasting and potentially losing information.
- Example::
>>> csr = torch.eye(5,5).to_sparse_csr()
>>> csr.col_indices()
tensor([0, 1, 2, 3, 4], dtype=torch.int32)
- conj() Tensor
See
torch.conj()
- conj_physical_() Tensor
In-place version of
conj_physical()
- contiguous(memory_format=torch.contiguous_format) Tensor
Returns a contiguous in memory tensor containing the same data as
self tensor. If self tensor is already in the specified memory format, this function returns the self tensor.
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.contiguous_format.
- copy_(src, non_blocking=False) Tensor
Copies the elements from
src into self tensor and returns self.
The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.
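A brief illustrative sketch showing broadcasting and dtype conversion during the copy:
>>> import torch
>>> dst = torch.zeros(2, 3)
>>> src = torch.tensor([1, 2, 3])   # integer dtype, broadcastable shape (3,)
>>> dst.copy_(src)
tensor([[1., 2., 3.],
        [1., 2., 3.]])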
- copysign(other) Tensor
See
torch.copysign()
- copysign_(other) Tensor
In-place version of
copysign()
- corrcoef() Tensor
See
torch.corrcoef()
- cos() Tensor
See
torch.cos()
- cosh() Tensor
See
torch.cosh()
- cov(*, correction=1, fweights=None, aweights=None) Tensor
See
torch.cov()
- cpu(memory_format=torch.preserve_format) Tensor
Returns a copy of this object in CPU memory.
If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- cross(other, dim=None) Tensor
See
torch.cross()
- crow_indices() IntTensor
Returns the tensor containing the compressed row indices of the
self tensor when self is a sparse CSR tensor of layout sparse_csr. The crow_indices tensor is strictly of shape (self.size(0) + 1) and of type int32 or int64. When using MKL routines such as sparse matrix multiplication, it is necessary to use int32 indexing in order to avoid downcasting and potentially losing information.
- Example::
>>> csr = torch.eye(5,5).to_sparse_csr()
>>> csr.crow_indices()
tensor([0, 1, 2, 3, 4, 5], dtype=torch.int32)
- cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) Tensor
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters:
device (torch.device) – The destination GPU device. Defaults to the current CUDA device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- cummax(dim)
See
torch.cummax()
- cummin(dim)
See
torch.cummin()
- cumprod(dim, dtype=None) Tensor
See
torch.cumprod()
- cumsum(dim, dtype=None) Tensor
See
torch.cumsum()
- deg2rad() Tensor
See
torch.deg2rad()
- dense_dim() int
Return the number of dense dimensions in a sparse tensor
self.
Note
Returns len(self.shape) if self is not a sparse tensor.
See also
Tensor.sparse_dim() and hybrid tensors.
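A minimal illustrative sketch using a hybrid sparse tensor (one sparse and one dense dimension):
>>> import torch
>>> hybrid = torch.randn(3, 4).to_sparse(1)   # keep the trailing dimension dense
>>> hybrid.sparse_dim(), hybrid.dense_dim()
(1, 1)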
- dequantize() Tensor
Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
- det() Tensor
See
torch.det()
- detach()
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
This method also affects forward mode AD gradients and the result will never have forward mode AD gradients.
Note
Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks.
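A short illustrative sketch of the shared storage and the absence of gradient tracking:
>>> import torch
>>> a = torch.ones(3, requires_grad=True)
>>> b = a.detach()                    # no grad tracking on the result
>>> b.requires_grad
False
>>> b.data_ptr() == a.data_ptr()      # same underlying storage
True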
- detach_()
Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.
This method also affects forward mode AD gradients and the result will never have forward mode AD gradients.
- device
Is the torch.device where this Tensor is.
- diag(diagonal=0) Tensor
See
torch.diag()
- diagflat(offset=0) Tensor
See
torch.diagflat()
- diagonal(offset=0, dim1=0, dim2=1) Tensor
See
torch.diagonal()
- diff(n=1, dim=-1, prepend=None, append=None) Tensor
See
torch.diff()
- digamma() Tensor
See
torch.digamma()
- dim_order() tuple[source]
Returns a tuple of int describing the dim order or physical layout of
self.
- Parameters:
None
Dim order represents how dimensions are laid out in memory, starting from the outermost to the innermost dimension.
- Example::
>>> torch.empty((2, 3, 5, 7)).dim_order()
(0, 1, 2, 3)
>>> torch.empty((2, 3, 5, 7), memory_format=torch.channels_last).dim_order()
(0, 2, 3, 1)
Warning
The dim_order tensor API is experimental and subject to change.
- dist(other, p=2) Tensor
See
torch.dist()
- div(value, *, rounding_mode=None) Tensor
See
torch.div()
- divide(value, *, rounding_mode=None) Tensor
See
torch.divide()
- dot(other) Tensor
See
torch.dot()
- double(memory_format=torch.preserve_format) Tensor
self.double() is equivalent to self.to(torch.float64). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- dsplit(split_size_or_sections) List of Tensors
See
torch.dsplit()
- element_size() int
Returns the size in bytes of an individual element.
Example:
>>> torch.tensor([]).element_size()
4
>>> torch.tensor([], dtype=torch.uint8).element_size()
1
- eq(other) Tensor
See
torch.eq()
- equal(other) bool
See
torch.equal()
- erf() Tensor
See
torch.erf()
- erfc() Tensor
See
torch.erfc()
- erfinv() Tensor
See
torch.erfinv()
- exp() Tensor
See
torch.exp()
- exp2() Tensor
See
torch.exp2()
- expand(*sizes) Tensor
Returns a new view of the
self tensor with singleton dimensions expanded to a larger size.
Passing -1 as the size for a dimension means not changing the size of that dimension.
Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the
stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.
- Parameters:
*sizes (torch.Size or int...) – the desired expanded size
Warning
More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.
Example:
>>> x = torch.tensor([[1], [2], [3]]) >>> x.size() torch.Size([3, 1]) >>> x.expand(3, 4) tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]]) >>> x.expand(-1, 4) # -1 means not changing the size of that dimension tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]])
- expand_as(other) Tensor
Expand this tensor to the same size as
other. self.expand_as(other) is equivalent to self.expand(other.size()).
Please see expand() for more information about expand.
- Parameters:
other (torch.Tensor) – The result tensor has the same size as other.
- expm1() Tensor
See
torch.expm1()
- exponential_(lambd=1, *, generator=None) Tensor
Fills
self tensor with elements drawn from the PDF (probability density function):
\[f(x) = \lambda e^{-\lambda x}, x > 0\]
Note
In probability theory, the exponential distribution is supported on the interval [0, \(\inf\)) (i.e., \(x >= 0\)), implying that zero can be sampled from the exponential distribution. However, torch.Tensor.exponential_() does not sample zero, which means that its actual support is the interval (0, \(\inf\)).
Note that torch.distributions.exponential.Exponential() is supported on the interval [0, \(\inf\)) and can sample zero.
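A minimal illustrative sketch (draws are random; only the strictly positive support is checked):
>>> import torch
>>> samples = torch.empty(4).exponential_(lambd=2.0)   # rate parameter lambda = 2
>>> bool((samples > 0).all())
True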
- fill_diagonal_(fill_value, wrap=False) Tensor
Fill the main diagonal of a tensor that has at least 2 dimensions. When dims > 2, all dimensions of input must be of equal length. This function modifies the input tensor in-place, and returns the input tensor.
- Parameters:
fill_value (Scalar) – the fill value
wrap (bool) – the diagonal ‘wrapped’ after N columns for tall matrices.
Example:
>>> a = torch.zeros(3, 3) >>> a.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.]]) >>> b = torch.zeros(7, 3) >>> b.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]) >>> c = torch.zeros(7, 3) >>> c.fill_diagonal_(5, wrap=True) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [5., 0., 0.], [0., 5., 0.], [0., 0., 5.]])
- fix() Tensor
See
torch.fix().
- flatten(start_dim=0, end_dim=-1) Tensor
See
torch.flatten()
- flip(dims) Tensor
See
torch.flip()
- fliplr() Tensor
See
torch.fliplr()
- flipud() Tensor
See
torch.flipud()
- float(memory_format=torch.preserve_format) Tensor
self.float() is equivalent to self.to(torch.float32). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- float_power_(exponent) Tensor
In-place version of
float_power()
- floor() Tensor
See
torch.floor()
- floor_divide_(value) Tensor
In-place version of
floor_divide()
- fmax(other) Tensor
See
torch.fmax()
- fmin(other) Tensor
See
torch.fmin()
- fmod(divisor) Tensor
See
torch.fmod()
- frac() Tensor
See
torch.frac()
- frexp(input) -> (Tensor mantissa, Tensor exponent)
See
torch.frexp()
- gather(dim, index) Tensor
See
torch.gather()
- gcd(other) Tensor
See
torch.gcd()
- ge(other) Tensor
See
torch.ge().
- geometric_(p, *, generator=None) Tensor
Fills
self tensor with elements drawn from the geometric distribution:
\[P(X=k) = (1 - p)^{k - 1} p, k = 1, 2, ...\]
Note
torch.Tensor.geometric_() treats the k-th trial as the first success and hence draws samples in \(\{1, 2, \ldots\}\), whereas torch.distributions.geometric.Geometric() treats the \((k+1)\)-th trial as the first success and hence draws samples in \(\{0, 1, \ldots\}\).
- geqrf()
See
torch.geqrf()
- ger(vec2) Tensor
See
torch.ger()
- get_device() -> Device ordinal (Integer)
For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, this function returns -1.
Example:
>>> x = torch.randn(3, 4, 5, device='cuda:0') >>> x.get_device() 0 >>> x.cpu().get_device() -1
- grad
This attribute is
None by default and becomes a Tensor the first time a call to backward() computes gradients for self. The attribute will then contain the gradients computed and future calls to backward() will accumulate (add) gradients into it.
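A brief illustrative sketch of lazy creation and accumulation of .grad:
>>> import torch
>>> x = torch.tensor(2.0, requires_grad=True)
>>> x.grad is None                 # nothing computed yet
True
>>> (x ** 2).backward()
>>> x.grad                         # d(x^2)/dx at x = 2
tensor(4.)
>>> (x ** 2).backward()
>>> x.grad                         # the second call accumulates into the same attribute
tensor(8.)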
- greater(other) Tensor
See
torch.greater().
- greater_equal_(other) Tensor
In-place version of
greater_equal().
- gt(other) Tensor
See
torch.gt().
- half(memory_format=torch.preserve_format) Tensor
self.half() is equivalent to self.to(torch.float16). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- has_names()
Is
True if any of this tensor’s dimensions are named. Otherwise, is False.
- heaviside_(values) Tensor
In-place version of
heaviside()
- histc(bins=100, min=0, max=0) Tensor
See
torch.histc()
- histogram(input, bins, *, range=None, weight=None, density=False)
- hsplit(split_size_or_sections) List of Tensors
See
torch.hsplit()
- hypot(other) Tensor
See
torch.hypot()
- i0() Tensor
See
torch.i0()
- igamma(other) Tensor
See
torch.igamma()
- igammac(other) Tensor
See
torch.igammac()
- imag
Returns a new tensor containing imaginary values of the
self tensor. The returned tensor and self share the same underlying storage.
Warning
imag() is only supported for tensors with complex dtypes.
- Example::
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.imag
tensor([ 0.3553, -0.7896, -0.0633, -0.8119])
- index_add(dim, index, source, *, alpha=1) Tensor
Out-of-place version of
torch.Tensor.index_add_().
- index_add_(dim, index, source, *, alpha=1) Tensor
Accumulate the elements of
alpha times source into the self tensor by adding to the indices in the order given in index. For example, if dim == 0, index[i] == j, and alpha=-1, then the ith row of source is subtracted from the jth row of self.
The dimth dimension of source must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
For a 3-D tensor the output is given as:
self[index[i], :, :] += alpha * src[i, :, :]  # if dim == 0
self[:, index[i], :] += alpha * src[:, i, :]  # if dim == 1
self[:, :, index[i]] += alpha * src[:, :, i]  # if dim == 2
Note
This operation may behave nondeterministically when given tensors on a CUDA device. See /notes/randomness for more information.
- Parameters:
- Keyword Arguments:
alpha (Number) – the scalar multiplier for
source
Example:
>>> x = torch.ones(5, 3) >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 4, 2]) >>> x.index_add_(0, index, t) tensor([[ 2., 3., 4.], [ 1., 1., 1.], [ 8., 9., 10.], [ 1., 1., 1.], [ 5., 6., 7.]]) >>> x.index_add_(0, index, t, alpha=-1) tensor([[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]])
- index_copy(dim, index, tensor2) Tensor
Out-of-place version of
torch.Tensor.index_copy_().
- index_copy_(dim, index, tensor) Tensor
Copies the elements of
tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the ith row of tensor is copied to the jth row of self.
The dimth dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
Note
If index contains duplicate entries, multiple elements from tensor will be copied to the same index of self. The result is nondeterministic since it depends on which copy occurs last.
Example:
>>> x = torch.zeros(5, 3) >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 4, 2]) >>> x.index_copy_(0, index, t) tensor([[ 1., 2., 3.], [ 0., 0., 0.], [ 7., 8., 9.], [ 0., 0., 0.], [ 4., 5., 6.]])
- index_fill(dim, index, value) Tensor
Out-of-place version of
torch.Tensor.index_fill_().
- index_fill_(dim, index, value) Tensor
Fills the elements of the
self tensor with value value by selecting the indices in the order given in index.
- Parameters:
- Example::
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 2]) >>> x.index_fill_(1, index, -1) tensor([[-1., 2., -1.], [-1., 5., -1.], [-1., 8., -1.]])
- index_put(indices, values, accumulate=False) Tensor
Out-place version of
index_put_().
- index_put_(indices, values, accumulate=False) Tensor
Puts values from the tensor
values into the tensor self using the indices specified in indices (which is a tuple of Tensors). The expression tensor.index_put_(indices, values) is equivalent to tensor[indices] = values. Returns self.
If accumulate is True, the elements in values are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements.
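A minimal illustrative sketch with a tuple of index tensors:
>>> import torch
>>> t = torch.zeros(2, 3)
>>> rows, cols = torch.tensor([0, 1]), torch.tensor([2, 0])
>>> t.index_put_((rows, cols), torch.tensor([1., 2.]))   # same effect as t[rows, cols] = values
tensor([[0., 0., 1.],
        [2., 0., 0.]])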
- index_reduce_(dim, index, source, reduce, *, include_self=True) Tensor
Accumulate the elements of
source into the self tensor by accumulating to the indices in the order given in index using the reduction given by the reduce argument. For example, if dim == 0, index[i] == j, reduce == prod and include_self == True then the ith row of source is multiplied by the jth row of self. If include_self="True", the values in the self tensor are included in the reduction, otherwise, rows in the self tensor that are accumulated to are treated as if they were filled with the reduction identities.
The dimth dimension of source must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
For a 3-D tensor with reduce="prod" and include_self=True the output is given as:
self[index[i], :, :] *= src[i, :, :]  # if dim == 0
self[:, index[i], :] *= src[:, i, :]  # if dim == 1
self[:, :, index[i]] *= src[:, :, i]  # if dim == 2
Note
This operation may behave nondeterministically when given tensors on a CUDA device. See /notes/randomness for more information.
Note
This function only supports floating point tensors.
Warning
This function is in beta and may change in the near future.
- Parameters:
- Keyword Arguments:
include_self (bool) – whether the elements from the
self tensor are included in the reduction
Example:
>>> x = torch.empty(5, 3).fill_(2) >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], dtype=torch.float) >>> index = torch.tensor([0, 4, 2, 0]) >>> x.index_reduce_(0, index, t, 'prod') tensor([[20., 44., 72.], [ 2., 2., 2.], [14., 16., 18.], [ 2., 2., 2.], [ 8., 10., 12.]]) >>> x = torch.empty(5, 3).fill_(2) >>> x.index_reduce_(0, index, t, 'prod', include_self=False) tensor([[10., 22., 36.], [ 2., 2., 2.], [ 7., 8., 9.], [ 2., 2., 2.], [ 4., 5., 6.]])
- indices() Tensor
Return the indices tensor of a sparse COO tensor.
Warning
Throws an error if
self is not a sparse COO tensor.
See also
Tensor.values().
Note
This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details.
- inner(other) Tensor
See
torch.inner().
- int(memory_format=torch.preserve_format) Tensor
self.int() is equivalent to self.to(torch.int32). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- int_repr() Tensor
Given a quantized Tensor,
self.int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.
- inverse() Tensor
See
torch.inverse()
- ipu(device=None, non_blocking=False, memory_format=torch.preserve_format) Tensor
Returns a copy of this object in IPU memory.
If this object is already in IPU memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters:
device (torch.device) – The destination IPU device. Defaults to the current IPU device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- is_coalesced() bool
Returns
True if self is a sparse COO tensor that is coalesced, False otherwise.
Warning
Throws an error if self is not a sparse COO tensor.
See
coalesce() and uncoalesced tensors.
- is_contiguous(memory_format=torch.contiguous_format) bool
Returns True if
self tensor is contiguous in memory in the order specified by memory format.
- Parameters:
memory_format (torch.memory_format, optional) – Specifies memory allocation order. Default: torch.contiguous_format.
- is_cpu
Is True if the Tensor is stored on the CPU, False otherwise.
- is_cuda
Is True if the Tensor is stored on the GPU, False otherwise.
- is_ipu
Is True if the Tensor is stored on the IPU, False otherwise.
- is_leaf
All Tensors that have requires_grad which is False will be leaf Tensors by convention.
For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.
Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad().
>>> a = torch.rand(10, requires_grad=True) >>> a.is_leaf True >>> b = torch.rand(10, requires_grad=True).cuda() >>> b.is_leaf False # b was created by the operation that cast a cpu Tensor into a cuda Tensor >>> c = torch.rand(10, requires_grad=True) + 2 >>> c.is_leaf False # c was created by the addition operation >>> d = torch.rand(10).cuda() >>> d.is_leaf True # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine) >>> e = torch.rand(10).cuda().requires_grad_() >>> e.is_leaf True # e requires gradients and has no operations creating it >>> f = torch.rand(10, requires_grad=True, device="cuda") >>> f.is_leaf True # f requires grad, has no operation creating it
- is_meta
Is True if the Tensor is a meta tensor, False otherwise. Meta tensors are like normal tensors, but they carry no data.
- is_mps
Is True if the Tensor is stored on the MPS device, False otherwise.
- is_pinned()
Returns true if this tensor resides in pinned memory.
- is_quantized
Is True if the Tensor is quantized, False otherwise.
- is_set_to(tensor) bool
Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
- is_shared()
Checks if tensor is in shared memory.
This is always True for CUDA tensors.
- is_sparse
Is True if the Tensor uses sparse COO storage layout, False otherwise.
- is_sparse_csr
Is True if the Tensor uses sparse CSR storage layout, False otherwise.
- is_xla
Is True if the Tensor is stored on an XLA device, False otherwise.
- is_xpu
Is True if the Tensor is stored on the XPU, False otherwise.
- isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) Tensor
See
torch.isclose()
- isfinite() Tensor
See
torch.isfinite()
- isinf() Tensor
See
torch.isinf()
- isnan() Tensor
See
torch.isnan()
- isneginf() Tensor
See
torch.isneginf()
- isposinf() Tensor
See
torch.isposinf()
- isreal() Tensor
See
torch.isreal()
- istft(n_fft: int, hop_length: int | None = None, win_length: int | None = None, window: Tensor | None = None, center: bool = True, normalized: bool = False, onesided: bool | None = None, length: int | None = None, return_complex: bool = False)[source]
See
torch.istft()
- item() number
Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see
tolist().
This operation is not differentiable.
Example:
>>> x = torch.tensor([1.0]) >>> x.item() 1.0
- itemsize
Alias for
element_size()
- kron(other) Tensor
See
torch.kron()
- kthvalue(k, dim=None, keepdim=False)
See
torch.kthvalue()
- lcm(other) Tensor
See
torch.lcm()
- ldexp(other) Tensor
See
torch.ldexp()
- le(other) Tensor
See
torch.le().
- lerp(end, weight) Tensor
See
torch.lerp()
- less()
lt(other) -> Tensor
See
torch.less().
- less_equal(other) Tensor
See
torch.less_equal().
- less_equal_(other) Tensor
In-place version of
less_equal().
- lgamma() Tensor
See
torch.lgamma()
- log() Tensor
See
torch.log()
- log10() Tensor
See
torch.log10()
- log1p() Tensor
See
torch.log1p()
- log2() Tensor
See
torch.log2()
- log_normal_(mean=1, std=2, *, generator=None)
Fills
self tensor with numbers sampled from the log-normal distribution parameterized by the given mean \(\mu\) and standard deviation \(\sigma\). Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:
\[f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}\]
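A minimal illustrative sketch (draws are random; only positivity of the samples is checked):
>>> import torch
>>> samples = torch.empty(5).log_normal_(mean=0.0, std=0.25)   # parameters of the underlying normal
>>> bool((samples > 0).all())
True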
- logdet() Tensor
See
torch.logdet()
- logical_and_() Tensor
In-place version of
logical_and()
- logical_not_() Tensor
In-place version of
logical_not()
- logical_or_() Tensor
In-place version of
logical_or()
- logical_xor_() Tensor
In-place version of
logical_xor()
- logit() Tensor
See
torch.logit()
- long(memory_format=torch.preserve_format) Tensor
self.long() is equivalent to self.to(torch.int64). See to().
- Parameters:
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- lt(other) Tensor
See
torch.lt().
- lu(pivot=True, get_infos=False)[source]
See
torch.lu()
- lu_solve(LU_data, LU_pivots) Tensor
See
torch.lu_solve()
- mT
Returns a view of this tensor with the last two dimensions transposed.
x.mT is equivalent to x.transpose(-2, -1).
- map_(tensor, callable)
Applies
callable for each element in self tensor and the given tensor and stores the results in self tensor. self tensor and the given tensor must be broadcastable.
The callable should have the signature:
def callable(a, b) -> number
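A small illustrative sketch (CPU tensors only):
>>> import torch
>>> a = torch.tensor([1., 2., 3.])
>>> b = torch.tensor([10., 20., 30.])
>>> a.map_(b, lambda x, y: x + y)   # callable applied element-wise, result stored in a
tensor([11., 22., 33.])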
- masked_fill(mask, value) Tensor
Out-of-place version of
torch.Tensor.masked_fill_()
- masked_fill_(mask, value)
Fills elements of
self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor.
- Parameters:
mask (BoolTensor) – the boolean mask
value (float) – the value to fill in with
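Example (an illustrative sketch, not from the original reference):
>>> import torch
>>> t = torch.arange(6.).reshape(2, 3)
>>> t.masked_fill_(t > 3, -1.0)
tensor([[ 0.,  1.,  2.],
        [ 3., -1., -1.]])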
- masked_scatter(mask, tensor) Tensor
Out-of-place version of
torch.Tensor.masked_scatter_()
Note
The inputs self and mask broadcast.
Example
>>> self = torch.tensor([0, 0, 0, 0, 0]) >>> mask = torch.tensor([[0, 0, 0, 1, 1], [1, 1, 0, 1, 1]], dtype=torch.bool) >>> source = torch.tensor([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> self.masked_scatter(mask, source) tensor([[0, 0, 0, 0, 1], [2, 3, 0, 4, 5]])
- masked_scatter_(mask, source)
Copies elements from
source into self tensor at positions where the mask is True. Elements from source are copied into self starting at position 0 of source and continuing in order one-by-one for each occurrence of mask being True. The shape of mask must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask.
- Parameters:
mask (BoolTensor) – the boolean mask
source (Tensor) – the tensor to copy from
Note
The
mask operates on the self tensor, not on the given source tensor.
Example
>>> self = torch.tensor([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]) >>> mask = torch.tensor([[0, 0, 0, 1, 1], [1, 1, 0, 1, 1]], dtype=torch.bool) >>> source = torch.tensor([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> self.masked_scatter_(mask, source) tensor([[0, 0, 0, 0, 1], [2, 3, 0, 4, 5]])
- matmul(tensor2) Tensor
See
torch.matmul()
- matrix_power(n) Tensor
Note
matrix_power() is deprecated, use torch.linalg.matrix_power() instead.
Alias for torch.linalg.matrix_power()
- max(dim=None, keepdim=False)
See
torch.max()
- maximum(other) Tensor
See
torch.maximum()
- mean(dim=None, keepdim=False, *, dtype=None) Tensor
See
torch.mean()
- median(dim=None, keepdim=False)
See
torch.median()
- min(dim=None, keepdim=False)
See
torch.min()
- minimum(other) Tensor
See
torch.minimum()
- mm(mat2) Tensor
See
torch.mm()
- mode(dim=None, keepdim=False)
See
torch.mode()
- module_load(other, assign=False)[source]
Defines how to transform
other when loading it into self in load_state_dict().
Used when get_swap_module_params_on_conversion() is True.
It is expected that self is a parameter or buffer in an nn.Module and other is the value in the state dictionary with the corresponding key; this method defines how other is remapped before being swapped with self via swap_tensors() in load_state_dict().
Note
This method should always return a new object that is not self or other. For example, the default implementation returns self.copy_(other).detach() if assign is False or other.detach() if assign is True.
- moveaxis(source, destination) Tensor
See
torch.moveaxis()
- movedim(source, destination) Tensor
See
torch.movedim()
- msort() Tensor
See
torch.msort()
- mtia(device=None, non_blocking=False, memory_format=torch.preserve_format) Tensor
Returns a copy of this object in MTIA memory.
If this object is already in MTIA memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters:
device (torch.device) – The destination MTIA device. Defaults to the current MTIA device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
- mul(value) Tensor
See
torch.mul().
- multiply(value) Tensor
See
torch.multiply().
- multiply_(value) Tensor
In-place version of
multiply().
- mv(vec) Tensor
See
torch.mv()
- mvlgamma(p) Tensor
See
torch.mvlgamma()
- mvlgamma_(p) Tensor
In-place version of
mvlgamma()
- names
Stores names for each of this tensor’s dimensions.
names[idx] corresponds to the name of tensor dimension idx. Names are either a string if the dimension is named or None if the dimension is unnamed.
Dimension names may contain characters or underscore. Furthermore, a dimension name must be a valid Python variable name (i.e., does not start with underscore).
Tensors may not have two named dimensions with the same name.
Warning
The named tensor API is experimental and subject to change.
- nan_to_num(nan=0.0, posinf=None, neginf=None) Tensor
See
torch.nan_to_num().
- nan_to_num_(nan=0.0, posinf=None, neginf=None) Tensor
In-place version of
nan_to_num().
- nanmean(dim=None, keepdim=False, *, dtype=None) Tensor
See
torch.nanmean()
- nanmedian(dim=None, keepdim=False)
- nansum(dim=None, keepdim=False, dtype=None) Tensor
See
torch.nansum()
- narrow(dimension, start, length) Tensor
See
torch.narrow().
- narrow_copy(dimension, start, length) Tensor
See
torch.narrow_copy().
- nbytes
Returns the number of bytes consumed by the “view” of elements of the Tensor if the Tensor does not use sparse storage layout. Defined to be
numel() * element_size()
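A quick illustrative check of that identity:
>>> import torch
>>> t = torch.zeros(2, 3, dtype=torch.float32)
>>> t.nbytes                                  # 6 elements * 4 bytes each
24
>>> t.nbytes == t.numel() * t.element_size()
True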
- ne(other) Tensor
See
torch.ne().
- neg() Tensor
See
torch.neg()
- negative() Tensor
See
torch.negative()
- negative_() Tensor
In-place version of
negative()
- new_empty(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) Tensor
Returns a Tensor of size
size filled with uninitialized data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
- Parameters:
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
- Keyword Arguments:
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> tensor = torch.ones(()) >>> tensor.new_empty((2, 3)) tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30], [ 3.0949e-41, 4.4842e-44, 0.0000e+00]])
- new_empty_strided(size, stride, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) Tensor
Returns a Tensor of size
size and strides stride filled with uninitialized data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
- Parameters:
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
- Keyword Arguments:
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> tensor = torch.ones(()) >>> tensor.new_empty_strided((2, 3), (3, 1)) tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30], [ 3.0949e-41, 4.4842e-44, 0.0000e+00]])
- new_full(size, fill_value, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) Tensor
Returns a Tensor of size
size filled with fill_value. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
- Parameters:
fill_value (scalar) – the number to fill the output tensor with.
- Keyword Arguments:
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> tensor = torch.ones((2,), dtype=torch.float64) >>> tensor.new_full((3, 4), 3.141592) tensor([[ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)
- new_ones(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) Tensor
Returns a Tensor of size
size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
- Parameters:
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
- Keyword Arguments:
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> tensor = torch.tensor((), dtype=torch.int32) >>> tensor.new_ones((2, 3)) tensor([[ 1, 1, 1], [ 1, 1, 1]], dtype=torch.int32)
- new_tensor(data, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) Tensor
Returns a new Tensor with
data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Warning
new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a numpy array and want to avoid a copy, use torch.from_numpy().
Warning
When data is a tensor x, new_tensor() reads out ‘the data’ from whatever it is passed, and constructs a leaf variable. Therefore tensor.new_tensor(x) is equivalent to x.clone().detach() and tensor.new_tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.
- Parameters:
data (array_like) – The returned Tensor copies data.
- Keyword Arguments:
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> tensor = torch.ones((2,), dtype=torch.int8) >>> data = [[0, 1], [2, 3]] >>> tensor.new_tensor(data) tensor([[ 0, 1], [ 2, 3]], dtype=torch.int8)
- new_zeros(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) Tensor
Returns a Tensor of size
size filled with 0. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
- Parameters:
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
- Keyword Arguments:
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> tensor = torch.tensor((), dtype=torch.float64) >>> tensor.new_zeros((2, 3)) tensor([[ 0., 0., 0.], [ 0., 0., 0.]], dtype=torch.float64)
- nextafter_(other) Tensor
In-place version of
nextafter()
- nonzero() LongTensor
See
torch.nonzero()
- nonzero_static(input, *, size, fill_value=-1) Tensor
Returns a 2-D tensor where each row is the index for a non-zero value. The returned Tensor has the same torch.dtype as torch.nonzero().
- Parameters:
input (Tensor) – the input tensor to count non-zero elements.
- Keyword Arguments:
size (int) – the size of non-zero elements expected to be included in the out tensor. Pad the out tensor with fill_value if the size is larger than total number of non-zero elements, truncate out tensor if size is smaller. The size must be a non-negative integer.
fill_value (int) – the value to fill the output tensor with when size is larger than the total number of non-zero elements. Default is -1 to represent invalid index.
Example
# Example 1: Padding
>>> input_tensor = torch.tensor([[1, 0], [3, 2]])
>>> static_size = 4
>>> t = torch.nonzero_static(input_tensor, size = static_size)
tensor([[ 0,  0],
        [ 1,  0],
        [ 1,  1],
        [-1, -1]], dtype=torch.int64)

# Example 2: Truncating
>>> input_tensor = torch.tensor([[1, 0], [3, 2]])
>>> static_size = 2
>>> t = torch.nonzero_static(input_tensor, size = static_size)
tensor([[ 0,  0],
        [ 1,  0]], dtype=torch.int64)

# Example 3: 0 size
>>> input_tensor = torch.tensor([10])
>>> static_size = 0
>>> t = torch.nonzero_static(input_tensor, size = static_size)
tensor([], size=(0, 1), dtype=torch.int64)

# Example 4: 0 rank input
>>> input_tensor = torch.tensor(10)
>>> static_size = 2
>>> t = torch.nonzero_static(input_tensor, size = static_size)
tensor([], size=(2, 0), dtype=torch.int64)
- normal_(mean=0, std=1, *, generator=None) Tensor
Fills
self tensor with elements sampled from the normal distribution parameterized by mean and std.
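A minimal illustrative sketch (values are random, so only the shape is shown):
>>> import torch
>>> t = torch.empty(3, 3).normal_(mean=0.0, std=0.1)   # in-place Gaussian fill
>>> t.shape
torch.Size([3, 3])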
- not_equal(other) Tensor
See
torch.not_equal().
- not_equal_(other) Tensor
In-place version of
not_equal().
- numel() int
See
torch.numel()
- numpy(*, force=False) numpy.ndarray
Returns the tensor as a NumPy
ndarray.
If force is False (the default), the conversion is performed only if the tensor is on the CPU, does not require grad, does not have its conjugate bit set, and is a dtype and layout that NumPy supports. The returned ndarray and the tensor will share their storage, so changes to the tensor will be reflected in the ndarray and vice versa.
If force is True this is equivalent to calling t.detach().cpu().resolve_conj().resolve_neg().numpy(). If the tensor isn’t on the CPU or the conjugate or negative bit is set, the tensor won’t share its storage with the returned ndarray. Setting force to True can be a useful shorthand.
- Parameters:
force (bool) – if True, the ndarray may be a copy of the tensor instead of always sharing memory, defaults to False.
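A short illustrative sketch of the shared-memory behavior for a CPU tensor:
>>> import torch
>>> t = torch.arange(3.)
>>> arr = t.numpy()       # shares storage with t
>>> arr[0] = 10.0
>>> t                     # the change is visible through the tensor
tensor([10.,  1.,  2.])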
- orgqr(input2) Tensor
See
torch.orgqr()
- ormqr(input2, input3, left=True, transpose=False) Tensor
See
torch.ormqr()
- outer(vec2) Tensor
See
torch.outer().
- permute(*dims) Tensor
See
torch.permute()
- pinverse() Tensor
See
torch.pinverse()
- polygamma_(n) Tensor
In-place version of
polygamma()
- positive() Tensor
See
torch.positive()
- pow(exponent) Tensor
See
torch.pow()
- prod(dim=None, keepdim=False, dtype=None) Tensor
See
torch.prod()
- put(input, index, source, accumulate=False) Tensor
Out-of-place version of
torch.Tensor.put_(). input corresponds to self in torch.Tensor.put_().
- put_(index, source, accumulate=False) Tensor
Copies the elements from
source into the positions specified by index. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor.
index and source need to have the same number of elements, but not necessarily the same shape.
If accumulate is True, the elements in source are added to self. If accumulate is False, the behavior is undefined if index contain duplicate elements.
- Parameters:
Example:
>>> src = torch.tensor([[4, 3, 5], ... [6, 7, 8]]) >>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10])) tensor([[ 4, 9, 5], [ 10, 7, 8]])
- q_per_channel_axis() int
Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.
- q_per_channel_scales() Tensor
Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
- q_per_channel_zero_points() Tensor
Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
- q_scale() float
Given a Tensor quantized by linear(affine) quantization, returns the scale of the underlying quantizer().
- q_zero_point() int
Given a Tensor quantized by linear(affine) quantization, returns the zero_point of the underlying quantizer().
- qr(some=True)
See
torch.qr()
- qscheme() torch.qscheme
Returns the quantization scheme of a given QTensor.
- quantile(q, dim=None, keepdim=False, *, interpolation='linear') Tensor
See
torch.quantile()
- rad2deg() Tensor
See
torch.rad2deg()
- random_(from=0, to=None, *, generator=None) Tensor
Fills
self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by self tensor's data type. However, for floating point types, if unspecified, range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch.tensor(1, dtype=torch.double).random_() will be uniform in [0, 2^53].
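A minimal illustrative sketch (draws are random; only the range is checked):
>>> import torch
>>> t = torch.empty(4, dtype=torch.int64).random_(0, 10)   # uniform over [0, 9]
>>> bool(((t >= 0) & (t < 10)).all())
True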
- ravel() Tensor
see
torch.ravel()
- real
Returns a new tensor containing real values of the
self tensor for a complex-valued input tensor. The returned tensor and self share the same underlying storage.
Returns self if self is a real-valued tensor.
- Example::
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.real
tensor([ 0.3100, -0.5445, -1.6492, -0.0638])
- reciprocal_() Tensor
In-place version of
reciprocal()
- record_stream(stream)
Marks the tensor as having been used by this stream. When the tensor is deallocated, ensure the tensor memory is not reused for another tensor until all work queued on
stream at the time of deallocation is complete.
Note
The caching allocator is aware of only the stream where a tensor was allocated. Due to the awareness, it already correctly manages the life cycle of tensors on only one stream. But if a tensor is used on a stream different from the stream of origin, the allocator might reuse the memory unexpectedly. Calling this method lets the allocator know which streams have used the tensor.
Warning
This method is most suitable for use cases where you are providing a function that created a tensor on a side stream, and want users to be able to make use of the tensor without having to think carefully about stream safety when making use of them. These safety guarantees come at some performance and predictability cost (analogous to the tradeoff between GC and manual memory management), so if you are in a situation where you manage the full lifetime of your tensors, you may consider instead manually managing CUDA events so that calling this method is not necessary. In particular, when you call this method, on later allocations the allocator will poll the recorded stream to see if all operations have completed yet; you can potentially race with side stream computation and non-deterministically reuse or fail to reuse memory for an allocation.
You can safely use tensors allocated on side streams without
record_stream(); you must manually ensure that any non-creation stream uses of a tensor are synced back to the creation stream before you deallocate the tensor. As the CUDA caching allocator guarantees that the memory will only be reused with the same creation stream, this is sufficient to ensure that writes to future reallocations of the memory will be delayed until non-creation stream uses are done. (Counterintuitively, you may observe that on the CPU side we have already reallocated the tensor, even though CUDA kernels on the old tensor are still in progress. This is fine, because CUDA operations on the new tensor will appropriately wait for the old operations to complete, as they are all on the same stream.)
Concretely, this looks like this:
with torch.cuda.stream(s0):
    x = torch.zeros(N)

s1.wait_stream(s0)
with torch.cuda.stream(s1):
    y = some_comm_op(x)

... some compute on s0 ...

# synchronize creation stream s0 to side stream s1
# before deallocating x
s0.wait_stream(s1)
del x
Note that some discretion is required when deciding when to perform
s0.wait_stream(s1). In particular, if we were to wait immediately aftersome_comm_op, there wouldn’t be any point in having the side stream; it would be equivalent to have runsome_comm_opons0. Instead, the synchronization must be placed at some appropriate, later point in time where you expect the side streams1to have finished work. This location is typically identified via profiling, e.g., using Chrome traces produced by torch.autograd.profiler.profile.export_chrome_trace(). If you place the wait too early, work on s0 will block untils1has finished, preventing further overlapping of communication and computation. If you place the wait too late, you will use more memory than is strictly necessary (as you are keepingxlive for longer). For a concrete example of how this guidance can be applied in practice, see this post: FSDP and CUDACachingAllocator.
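For contrast, a minimal sketch of the record_stream() pattern itself, which avoids the manual s0.wait_stream(s1) before deallocation (assumes a CUDA device is available; shapes are illustrative):
>>> s0 = torch.cuda.current_stream()
>>> s1 = torch.cuda.Stream()
>>> x = torch.zeros(1024, device='cuda')
>>> s1.wait_stream(s0)
>>> with torch.cuda.stream(s1):
...     y = x + 1              # x is consumed on the side stream s1
>>> x.record_stream(s1)        # allocator will not reuse x's memory until s1's queued work finishes
>>> del x                      # safe without an explicit s0.wait_stream(s1)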
- refine_names(*names)[source]
Refines the dimension names of
selfaccording tonames.Refining is a special case of renaming that “lifts” unnamed dimensions. A
Nonedim can be refined to have any name; a named dim can only be refined to have the same name.Because named tensors can coexist with unnamed tensors, refining names gives a nice way to write named-tensor-aware code that works with both named and unnamed tensors.
namesmay contain up to one Ellipsis (...). The Ellipsis is expanded greedily; it is expanded in-place to fillnamesto the same length asself.dim()using names from the corresponding indices ofself.names.Python 2 does not support Ellipsis but one may use a string literal instead (
'...').- Parameters:
names (iterable of str) – The desired names of the output tensor. May contain up to one Ellipsis.
Examples:
>>> imgs = torch.randn(32, 3, 128, 128) >>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W') >>> named_imgs.names ('N', 'C', 'H', 'W') >>> tensor = torch.randn(2, 3, 5, 7, 11) >>> tensor = tensor.refine_names('A', ..., 'B', 'C') >>> tensor.names ('A', None, None, 'B', 'C')
Warning
The named tensor API is experimental and subject to change.
- register_hook(hook)[source]
Registers a backward hook.
The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:
hook(grad) -> Tensor or None
The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of
grad.This function returns a handle with a method
handle.remove()that removes the hook from the module.Note
See Backward Hooks execution for more information on when this hook is executed, and how its execution is ordered relative to other hooks.
Example:
>>> v = torch.tensor([0., 0., 0.], requires_grad=True) >>> h = v.register_hook(lambda grad: grad * 2) # double the gradient >>> v.backward(torch.tensor([1., 2., 3.])) >>> v.grad 2 4 6 [torch.FloatTensor of size (3,)] >>> h.remove() # removes the hook
- register_post_accumulate_grad_hook(hook)[source]
Registers a backward hook that runs after grad accumulation.
The hook will be called after all gradients for a tensor have been accumulated, meaning that the .grad field has been updated on that tensor. The post accumulate grad hook is ONLY applicable for leaf tensors (tensors without a .grad_fn field). Registering this hook on a non-leaf tensor will error!
The hook should have the following signature:
hook(param: Tensor) -> None
Note that, unlike other autograd hooks, this hook operates on the tensor that requires grad and not the grad itself. The hook can in-place modify and access its Tensor argument, including its .grad field.
This function returns a handle with a method
handle.remove()that removes the hook from the module.Note
See Backward Hooks execution for more information on when this hook is executed, and how its execution is ordered relative to other hooks. Since this hook runs during the backward pass, it will run in no_grad mode (unless create_graph is True). You can use torch.enable_grad() to re-enable autograd within the hook if you need it.
Example:
>>> v = torch.tensor([0., 0., 0.], requires_grad=True) >>> lr = 0.01 >>> # simulate a simple SGD update >>> h = v.register_post_accumulate_grad_hook(lambda p: p.add_(p.grad, alpha=-lr)) >>> v.backward(torch.tensor([1., 2., 3.])) >>> v tensor([-0.0100, -0.0200, -0.0300], requires_grad=True) >>> h.remove() # removes the hook
- remainder_(divisor) Tensor
In-place version of
remainder()
- rename(*names, **rename_map)[source]
Renames dimension names of
self.There are two main usages:
self.rename(**rename_map)returns a view on tensor that has dims renamed as specified in the mappingrename_map.self.rename(*names)returns a view on tensor, renaming all dimensions positionally usingnames. Useself.rename(None)to drop names on a tensor.One cannot specify both positional args
namesand keyword argsrename_map.Examples:
>>> imgs = torch.rand(2, 3, 5, 7, names=('N', 'C', 'H', 'W')) >>> renamed_imgs = imgs.rename(N='batch', C='channels') >>> renamed_imgs.names ('batch', 'channels', 'H', 'W') >>> renamed_imgs = imgs.rename(None) >>> renamed_imgs.names (None, None, None, None) >>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width') >>> renamed_imgs.names ('batch', 'channel', 'height', 'width')
Warning
The named tensor API is experimental and subject to change.
- renorm(p, dim, maxnorm) Tensor
See
torch.renorm()
- repeat(*repeats) Tensor
Repeats this tensor along the specified dimensions.
Unlike
expand(), this function copies the tensor’s data.Warning
repeat()behaves differently from numpy.repeat, but is more similar to numpy.tile. For the operator similar to numpy.repeat, seetorch.repeat_interleave().- Parameters:
repeat (torch.Size, int..., tuple of int or list of int) – The number of times to repeat this tensor along each dimension
Example:
>>> x = torch.tensor([1, 2, 3]) >>> x.repeat(4, 2) tensor([[ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3]]) >>> x.repeat(4, 2, 1).size() torch.Size([4, 2, 3])
- requires_grad
Is
Trueif gradients need to be computed for this Tensor,Falseotherwise.
- requires_grad_(requires_grad=True) Tensor
Change if autograd should record operations on this tensor: sets this tensor’s
requires_gradattribute in-place. Returns this tensor.requires_grad_()’s main use case is to tell autograd to begin recording operations on a Tensortensor. Iftensorhasrequires_grad=False(because it was obtained through a DataLoader, or required preprocessing or initialization),tensor.requires_grad_()makes it so that autograd will begin to record operations ontensor.- Parameters:
requires_grad (bool) – If autograd should record operations on this tensor. Default:
True.
Example:
>>> # Let's say we want to preprocess some saved weights and use >>> # the result as new weights. >>> saved_weights = [0.1, 0.2, 0.3, 0.25] >>> loaded_weights = torch.tensor(saved_weights) >>> weights = preprocess(loaded_weights) # some function >>> weights tensor([-0.5503, 0.4926, -2.1158, -0.8303]) >>> # Now, start to record operations done to weights >>> weights.requires_grad_() >>> out = weights.pow(2).sum() >>> out.backward() >>> weights.grad tensor([-1.1007, 0.9853, -4.2316, -1.6606])
- reshape(*shape) Tensor
Returns a tensor with the same data and number of elements as
selfbut with the specified shape. This method returns a view ifshapeis compatible with the current shape. Seetorch.Tensor.view()on when it is possible to return a view.See
torch.reshape()
- reshape_as(other) Tensor
Returns this tensor as the same shape as
other.self.reshape_as(other)is equivalent toself.reshape(other.sizes()). This method returns a view ifother.sizes()is compatible with the current shape. Seetorch.Tensor.view()on when it is possible to return a view.Please see
reshape()for more information aboutreshape.- Parameters:
other (
torch.Tensor) – The result tensor has the same shape asother.
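An illustrative sketch (tensor contents are arbitrary; only the shape of other matters):
>>> a = torch.arange(6)
>>> b = torch.empty(2, 3)
>>> a.reshape_as(b)
tensor([[0, 1, 2],
        [3, 4, 5]])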
- resize_(*sizes, memory_format=torch.contiguous_format) Tensor
Resizes
selftensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.Warning
This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use
view(), which checks for contiguity, orreshape(), which copies data if needed. To change the size in-place with custom strides, seeset_().Note
If
torch.use_deterministic_algorithms()andtorch.utils.deterministic.fill_uninitialized_memoryare both set toTrue, new elements are initialized to prevent nondeterministic behavior from using the result as an input to an operation. Floating point and complex values are set to NaN, and integer values are set to the maximum value.- Parameters:
sizes (torch.Size or int...) – the desired size
memory_format (
torch.memory_format, optional) – the desired memory format of Tensor. Default:torch.contiguous_format. Note that memory format ofselfis going to be unaffected ifself.size()matchessizes.
Example:
>>> x = torch.tensor([[1, 2], [3, 4], [5, 6]]) >>> x.resize_(2, 2) tensor([[ 1, 2], [ 3, 4]])
- resize_as_(tensor, memory_format=torch.contiguous_format) Tensor
Resizes the
selftensor to be the same size as the specifiedtensor. This is equivalent toself.resize_(tensor.size()).- Parameters:
memory_format (
torch.memory_format, optional) – the desired memory format of Tensor. Default:torch.contiguous_format. Note that memory format ofselfis going to be unaffected ifself.size()matchestensor.size().
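A small sketch (shapes chosen for illustration):
>>> x = torch.tensor([1, 2, 3, 4, 5, 6])
>>> y = torch.empty(2, 3)
>>> x.resize_as_(y).shape
torch.Size([2, 3])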
- retain_grad() None
Enables this Tensor to have its
gradpopulated duringbackward(). This is a no-op for leaf tensors.
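A short sketch showing grad retention on a non-leaf tensor (illustrative values):
>>> x = torch.tensor([1., 2.], requires_grad=True)
>>> y = x * 2                 # non-leaf tensor; its grad is normally not kept
>>> y.retain_grad()
>>> y.sum().backward()
>>> y.grad
tensor([1., 1.])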
- retains_grad
Is
Trueif this Tensor is non-leaf and itsgradis enabled to be populated duringbackward(),Falseotherwise.
- roll(shifts, dims) Tensor
See
torch.roll()
- rot90(k, dims) Tensor
See
torch.rot90()
- round(decimals=0) Tensor
See
torch.round()
- rsqrt() Tensor
See
torch.rsqrt()
- scatter(dim, index, src) Tensor
Out-of-place version of
torch.Tensor.scatter_()
- scatter_(dim, index, src, *, reduce=None) Tensor
Writes all values from the tensor
srcintoselfat the indices specified in theindextensor. For each value insrc, its output index is specified by its index insrcfordimension != dimand by the corresponding value inindexfordimension = dim.For a 3-D tensor,
selfis updated as:self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in
gather().self,indexandsrc(if it is a Tensor) should all have the same number of dimensions. It is also required thatindex.size(d) <= src.size(d)for all dimensionsd, and thatindex.size(d) <= self.size(d)for all dimensionsd != dim. Note thatindexandsrcdo not broadcast.Moreover, as for
gather(), the values ofindexmust be between0andself.size(dim) - 1inclusive.Warning
When indices are not unique, the behavior is non-deterministic (one of the values from
srcwill be picked arbitrarily) and the gradient will be incorrect (it will be propagated to all locations in the source that correspond to the same index)!Note
The backward pass is implemented only for
src.shape == index.shape.Additionally accepts an optional
reduceargument that allows specification of an optional reduction operation, which is applied to all values in the tensorsrcintoselfat the indices specified in theindex. For each value insrc, the reduction operation is applied to an index inselfwhich is specified by its index insrcfordimension != dimand by the corresponding value inindexfordimension = dim.Given a 3-D tensor and reduction using the multiplication operation,
selfis updated as:self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2
Reducing with the addition operation is the same as using
scatter_add_().Warning
The reduce argument with Tensor
srcis deprecated and will be removed in a future PyTorch release. Please usescatter_reduce_()instead for more reduction options.- Parameters:
- Keyword Arguments:
reduce (str, optional) – reduction operation to apply, can be either
'add'or'multiply'.
Example:
>>> src = torch.arange(1, 11).reshape((2, 5)) >>> src tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]) >>> index = torch.tensor([[0, 1, 2, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src) tensor([[1, 0, 0, 4, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0]]) >>> index = torch.tensor([[0, 1, 2], [0, 1, 4]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src) tensor([[1, 2, 3, 0, 0], [6, 7, 0, 0, 8], [0, 0, 0, 0, 0]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ... 1.23, reduce='multiply') tensor([[2.0000, 2.0000, 2.4600, 2.0000], [2.0000, 2.0000, 2.0000, 2.4600]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ... 1.23, reduce='add') tensor([[2.0000, 2.0000, 3.2300, 2.0000], [2.0000, 2.0000, 2.0000, 3.2300]])
- scatter_(dim, index, value, *, reduce=None) Tensor
Writes the value from
valueintoselfat the indices specified in theindextensor. This operation is equivalent to the previous version, with thesrctensor filled entirely withvalue.- Parameters:
dim (int) – the axis along which to index
index (LongTensor) – the indices of elements to scatter, can be either empty or of the same dimensionality as
src. When empty, the operation returnsselfunchanged.value (Scalar) – the value to scatter.
- Keyword Arguments:
reduce (str, optional) – reduction operation to apply, can be either
'add'or'multiply'.
Example:
>>> index = torch.tensor([[0, 1]]) >>> value = 2 >>> torch.zeros(3, 5).scatter_(0, index, value) tensor([[2., 0., 0., 0., 0.], [0., 2., 0., 0., 0.], [0., 0., 0., 0., 0.]])
- scatter_add(dim, index, src) Tensor
Out-of-place version of
torch.Tensor.scatter_add_()
- scatter_add_(dim, index, src) Tensor
Adds all values from the tensor
srcintoselfat the indices specified in theindextensor in a similar fashion asscatter_(). For each value insrc, it is added to an index inselfwhich is specified by its index insrcfordimension != dimand by the corresponding value inindexfordimension = dim.For a 3-D tensor,
selfis updated as:self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2
self,indexandsrcshould have same number of dimensions. It is also required thatindex.size(d) <= src.size(d)for all dimensionsd, and thatindex.size(d) <= self.size(d)for all dimensionsd != dim. Note thatindexandsrcdo not broadcast.Note
This operation may behave nondeterministically when given tensors on a CUDA device. See /notes/randomness for more information.
Note
The backward pass is implemented only for
src.shape == index.shape.- Parameters:
Example:
>>> src = torch.ones((2, 5)) >>> index = torch.tensor([[0, 1, 2, 0, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src) tensor([[1., 0., 0., 1., 1.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.]]) >>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src) tensor([[2., 0., 0., 1., 1.], [0., 2., 0., 0., 0.], [0., 0., 2., 1., 1.]])
- scatter_reduce(dim, index, src, reduce, *, include_self=True) Tensor
Out-of-place version of
torch.Tensor.scatter_reduce_()
- scatter_reduce_(dim, index, src, reduce, *, include_self=True) Tensor
Reduces all values from the
srctensor to the indices specified in theindextensor in theselftensor using the applied reduction defined via thereduceargument ("sum","prod","mean","amax","amin"). For each value insrc, it is reduced to an index inselfwhich is specified by its index insrcfordimension != dimand by the corresponding value inindexfordimension = dim. Ifinclude_self=True, the values in theselftensor are included in the reduction.self,indexandsrcshould all have the same number of dimensions. It is also required thatindex.size(d) <= src.size(d)for all dimensionsd, and thatindex.size(d) <= self.size(d)for all dimensionsd != dim. Note thatindexandsrcdo not broadcast.For a 3-D tensor with
reduce="sum"andinclude_self=Truethe output is given as:self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2
Note
This operation may behave nondeterministically when given tensors on a CUDA device. See /notes/randomness for more information.
Note
The backward pass is implemented only for
src.shape == index.shape.Warning
This function is in beta and may change in the near future.
- Parameters:
dim (int) – the axis along which to index
index (LongTensor) – the indices of elements to scatter and reduce.
src (Tensor) – the source elements to scatter and reduce
reduce (str) – the reduction operation to apply for non-unique indices (
"sum","prod","mean","amax","amin")include_self (bool) – whether elements from the
selftensor are included in the reduction
Example:
>>> src = torch.tensor([1., 2., 3., 4., 5., 6.]) >>> index = torch.tensor([0, 1, 0, 1, 2, 1]) >>> input = torch.tensor([1., 2., 3., 4.]) >>> input.scatter_reduce(0, index, src, reduce="sum") tensor([5., 14., 8., 4.]) >>> input.scatter_reduce(0, index, src, reduce="sum", include_self=False) tensor([4., 12., 5., 4.]) >>> input2 = torch.tensor([5., 4., 3., 2.]) >>> input2.scatter_reduce(0, index, src, reduce="amax") tensor([5., 6., 5., 2.]) >>> input2.scatter_reduce(0, index, src, reduce="amax", include_self=False) tensor([3., 6., 5., 2.])
- select(dim, index) Tensor
See
torch.select()
- set_(source=None, storage_offset=0, size=None, stride=None) Tensor
Sets the underlying storage, size, and strides. If
sourceis a tensor,selftensor will share the same storage and have the same size and strides assource. Changes to elements in one tensor will be reflected in the other.If
sourceis aStorage, the method sets the underlying storage, offset, size, and stride.- Parameters:
source (Tensor or Storage) – the tensor or storage to use
storage_offset (int, optional) – the offset in the storage
size (torch.Size, optional) – the desired size. Defaults to the size of the source.
stride (tuple, optional) – the desired stride. Defaults to C-contiguous strides.
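A minimal sketch showing storage sharing after set_() (illustrative values):
>>> src = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> t = torch.tensor([0])
>>> t.set_(src)               # t now shares src's storage, size and strides
tensor([[1, 2, 3],
        [4, 5, 6]])
>>> t[0, 0] = 99
>>> src[0, 0]
tensor(99)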
- sgn() Tensor
See
torch.sgn()
- shape
Returns the size of the
selftensor. Alias forsize.See also
Tensor.size().Example:
>>> t = torch.empty(3, 4, 5) >>> t.size() torch.Size([3, 4, 5]) >>> t.shape torch.Size([3, 4, 5])
- share_memory_()[source]
Moves the underlying storage to shared memory.
This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.
See
torch.UntypedStorage.share_memory_()for more details.
- short(memory_format=torch.preserve_format) Tensor
self.short()is equivalent toself.to(torch.int16). Seeto().- Parameters:
memory_format (
torch.memory_format, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format.
- sigmoid() Tensor
See
torch.sigmoid()
- sign() Tensor
See
torch.sign()
- signbit() Tensor
See
torch.signbit()
- sin() Tensor
See
torch.sin()
- sinc() Tensor
See
torch.sinc()
- sinh() Tensor
See
torch.sinh()
- size(dim=None) torch.Size or int
Returns the size of the
selftensor. Ifdimis not specified, the returned value is atorch.Size, a subclass oftuple. Ifdimis specified, returns an int holding the size of that dimension.- Parameters:
dim (int, optional) – The dimension for which to retrieve the size.
Example:
>>> t = torch.empty(3, 4, 5) >>> t.size() torch.Size([3, 4, 5]) >>> t.size(dim=1) 4
- slogdet()
See
torch.slogdet()
- smm(mat) Tensor
See
torch.smm()
- softmax(dim) Tensor
Alias for
torch.nn.functional.softmax().
- sort(dim=-1, descending=False)
See
torch.sort()
- sparse_dim() int
Return the number of sparse dimensions in a sparse tensor
self.Note
Returns
0ifselfis not a sparse tensor.See also
Tensor.dense_dim()and hybrid tensors.
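An illustrative sketch with a hybrid sparse COO tensor (one sparse and one dense dimension):
>>> i = torch.tensor([[0, 1]])                      # one sparse dimension, two specified elements
>>> v = torch.ones(2, 3)                            # each element carries a dense vector of size 3
>>> s = torch.sparse_coo_tensor(i, v, size=(2, 3))
>>> s.sparse_dim()
1
>>> s.dense_dim()
1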
- sparse_mask(mask) Tensor
Returns a new sparse tensor with values from a strided tensor
selffiltered by the indices of the sparse tensormask. The values ofmasksparse tensor are ignored.selfandmasktensors must have the same shape.Note
The returned sparse tensor might contain duplicate values if
maskis not coalesced. It is therefore advisable to passmask.coalesce()if such behavior is not desired.Note
The returned sparse tensor has the same indices as the sparse tensor
mask, even when the corresponding values inselfare zeros.- Parameters:
mask (Tensor) – a sparse tensor whose indices are used as a filter
Example:
>>> nse = 5 >>> dims = (5, 5, 2, 2) >>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)), ... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse) >>> V = torch.randn(nse, dims[2], dims[3]) >>> S = torch.sparse_coo_tensor(I, V, dims).coalesce() >>> D = torch.randn(dims) >>> D.sparse_mask(S) tensor(indices=tensor([[0, 0, 0, 2], [0, 1, 4, 3]]), values=tensor([[[ 1.6550, 0.2397], [-0.1611, -0.0779]], [[ 0.2326, -1.0558], [ 1.4711, 1.9678]], [[-0.5138, -0.0411], [ 1.9417, 0.5158]], [[ 0.0793, 0.0036], [-0.2569, -0.1055]]]), size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)
- sparse_resize_(size, sparse_dim, dense_dim) Tensor
Resizes
selfsparse tensor to the desired size and the number of sparse and dense dimensions.Note
If the number of specified elements in
selfis zero, thensize,sparse_dim, anddense_dimcan be any size and positive integers such thatlen(size) == sparse_dim + dense_dim.If
selfspecifies one or more elements, however, then each dimension insizemust not be smaller than the corresponding dimension ofself,sparse_dimmust equal the number of sparse dimensions inself, anddense_dimmust equal the number of dense dimensions inself.Warning
Throws an error if
selfis not a sparse tensor.- Parameters:
size (torch.Size) – the desired size. If
selfis non-empty sparse tensor, the desired size cannot be smaller than the original size.sparse_dim (int) – the number of sparse dimensions
dense_dim (int) – the number of dense dimensions
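A small sketch for the empty case, where any consistent size is accepted (assumes a COO tensor with no specified elements):
>>> s = torch.zeros(2, 3).to_sparse()     # sparse COO tensor, nnz == 0
>>> s.sparse_resize_((4, 5), 2, 0).shape
torch.Size([4, 5])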
- sparse_resize_and_clear_(size, sparse_dim, dense_dim) Tensor
Removes all specified elements from a sparse tensor
selfand resizesselfto the desired size and the number of sparse and dense dimensions.- Parameters:
size (torch.Size) – the desired size.
sparse_dim (int) – the number of sparse dimensions
dense_dim (int) – the number of dense dimensions
- split(split_size, dim=0)[source]
See
torch.split()
- sqrt() Tensor
See
torch.sqrt()
- square() Tensor
See
torch.square()
- squeeze(dim=None) Tensor
See
torch.squeeze()
- sspaddmm(mat1, mat2, *, beta=1, alpha=1) Tensor
See
torch.sspaddmm()
- std(dim=None, *, correction=1, keepdim=False) Tensor
See
torch.std()
- stft(n_fft: int, hop_length: int | None = None, win_length: int | None = None, window: Tensor | None = None, center: bool = True, pad_mode: str = 'reflect', normalized: bool = False, onesided: bool | None = None, return_complex: bool | None = None)[source]
See
torch.stft()Warning
This function changed signature at version 0.4.1. Calling with the previous signature may cause error or return incorrect result.
- storage() torch.TypedStorage[source]
Returns the underlying
TypedStorage.Warning
TypedStorageis deprecated. It will be removed in the future, andUntypedStoragewill be the only storage class. To access theUntypedStoragedirectly, useTensor.untyped_storage().
- storage_offset() int
Returns
selftensor’s offset in the underlying storage in terms of number of storage elements (not bytes).Example:
>>> x = torch.tensor([1, 2, 3, 4, 5]) >>> x.storage_offset() 0 >>> x[3:].storage_offset() 3
- stride(dim) tuple or int
Returns the stride of
selftensor.Stride is the jump necessary to go from one element to the next one in the specified dimension
dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimensiondim.- Parameters:
dim (int, optional) – the desired dimension in which stride is required
Example:
>>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]) >>> x.stride() (5, 1) >>> x.stride(0) 5 >>> x.stride(-1) 1
- sub(other, *, alpha=1) Tensor
See
torch.sub().
- subtract(other, *, alpha=1) Tensor
See
torch.subtract().
- subtract_(other, *, alpha=1) Tensor
In-place version of
subtract().
- sum(dim=None, keepdim=False, dtype=None) Tensor
See
torch.sum()
- sum_to_size(*size) Tensor
Sum
thistensor tosize.sizemust be broadcastable tothistensor size.- Parameters:
size (int...) – a sequence of integers defining the shape of the output tensor.
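For example (values chosen for illustration):
>>> x = torch.ones(2, 3)
>>> x.sum_to_size(1, 3)
tensor([[2., 2., 2.]])
>>> x.sum_to_size(2, 1)
tensor([[3.],
        [3.]])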
- svd(some=True, compute_uv=True)
See
torch.svd()
- swapaxes(axis0, axis1) Tensor
See
torch.swapaxes()
- swapaxes_(axis0, axis1) Tensor
In-place version of
swapaxes()
- swapdims(dim0, dim1) Tensor
See
torch.swapdims()
- swapdims_(dim0, dim1) Tensor
In-place version of
swapdims()
- take(indices) Tensor
See
torch.take()
- tan() Tensor
See
torch.tan()
- tanh() Tensor
See
torch.tanh()
- tensor_split(indices_or_sections, dim=0) List of Tensors
See
torch.tensor_split()
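A short sketch (an uneven split; the first chunks receive the extra elements):
>>> x = torch.arange(8)
>>> x.tensor_split(3)
(tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7]))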
- tile(dims) Tensor
See
torch.tile()
- to(*args, **kwargs) Tensor
Performs Tensor dtype and/or device conversion. A
torch.dtypeandtorch.deviceare inferred from the arguments ofself.to(*args, **kwargs).Note
If the
selfTensor already has the correcttorch.dtypeandtorch.device, thenselfis returned. Otherwise, the returned tensor is a copy ofselfwith the desiredtorch.dtypeandtorch.device.Here are the ways to call
to:- to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) Tensor
Returns a Tensor with the specified
dtype- Args:
memory_format (
torch.memory_format, optional): the desired memory format of returned Tensor. Default:torch.preserve_format.
- to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) Tensor
Returns a Tensor with the specified
deviceand (optional)dtype. IfdtypeisNoneit is inferred to beself.dtype. Whennon_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. Whencopyis set, a new Tensor is created even when the Tensor already matches the desired conversion.- Args:
memory_format (
torch.memory_format, optional): the desired memory format of returned Tensor. Default:torch.preserve_format.
- to(other, non_blocking=False, copy=False) Tensor
Returns a Tensor with same
torch.dtypeandtorch.deviceas the Tensorother. Whennon_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. Whencopyis set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example:
>>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu >>> tensor.to(torch.float64) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], dtype=torch.float64) >>> cuda0 = torch.device('cuda:0') >>> tensor.to(cuda0) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], device='cuda:0') >>> tensor.to(cuda0, dtype=torch.float64) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0') >>> other = torch.randn((), dtype=torch.float64, device=cuda0) >>> tensor.to(other, non_blocking=True) tensor([[-0.5044, 0.0005], [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
- to_dense(dtype=None, *, masked_grad=True) Tensor
Creates a strided copy of
selfifselfis not a strided tensor, otherwise returnsself.- Keyword Arguments:
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: None, i.e. the dtype of self.
masked_grad (bool, optional) – If set to
True(default) andselfhas a sparse layout then the backward ofto_dense()returnsgrad.sparse_mask(self).
Example:
>>> s = torch.sparse_coo_tensor( ... torch.tensor([[1, 1], ... [0, 2]]), ... torch.tensor([9, 10]), ... size=(3, 3)) >>> s.to_dense() tensor([[ 0, 0, 0], [ 9, 0, 10], [ 0, 0, 0]])
- to_sparse(sparseDims) Tensor
Returns a sparse copy of the tensor. PyTorch supports sparse tensors in coordinate format.
- Parameters:
sparseDims (int, optional) – the number of sparse dimensions to include in the new sparse tensor
Example:
>>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]]) >>> d tensor([[ 0, 0, 0], [ 9, 0, 10], [ 0, 0, 0]]) >>> d.to_sparse() tensor(indices=tensor([[1, 1], [0, 2]]), values=tensor([ 9, 10]), size=(3, 3), nnz=2, layout=torch.sparse_coo) >>> d.to_sparse(1) tensor(indices=tensor([[1]]), values=tensor([[ 9, 0, 10]]), size=(3, 3), nnz=1, layout=torch.sparse_coo)
- to_sparse(*, layout=None, blocksize=None, dense_dim=None) Tensor
Returns a sparse tensor with the specified layout and blocksize. If the
selfis strided, the number of dense dimensions could be specified, and a hybrid sparse tensor will be created, with dense_dim dense dimensions and self.dim() - 2 - dense_dim batch dimension.Note
If the
selflayout and blocksize parameters match with the specified layout and blocksize, returnself. Otherwise, return a sparse tensor copy ofself.- Parameters:
layout (
torch.layout, optional) – The desired sparse layout. One oftorch.sparse_coo,torch.sparse_csr,torch.sparse_csc,torch.sparse_bsr, ortorch.sparse_bsc. Default: ifNone,torch.sparse_coo.blocksize (list, tuple,
torch.Size, optional) – Block size of the resulting BSR or BSC tensor. For other layouts, specifying the block size that is notNonewill result in a RuntimeError exception. A block size must be a tuple of length two such that its items evenly divide the two sparse dimensions.dense_dim (int, optional) – Number of dense dimensions of the resulting CSR, CSC, BSR or BSC tensor. This argument should be used only if
selfis a strided tensor, and must be a value between 0 and dimension ofselftensor minus two.
Example:
>>> x = torch.tensor([[1, 0], [0, 0], [2, 3]]) >>> x.to_sparse(layout=torch.sparse_coo) tensor(indices=tensor([[0, 2, 2], [0, 0, 1]]), values=tensor([1, 2, 3]), size=(3, 2), nnz=3, layout=torch.sparse_coo) >>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(1, 2)) tensor(crow_indices=tensor([0, 1, 1, 2]), col_indices=tensor([0, 0]), values=tensor([[[1, 0]], [[2, 3]]]), size=(3, 2), nnz=2, layout=torch.sparse_bsr) >>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(2, 1)) RuntimeError: Tensor size(-2) 3 needs to be divisible by blocksize[0] 2 >>> x.to_sparse(layout=torch.sparse_csr, blocksize=(3, 1)) RuntimeError: to_sparse for Strided to SparseCsr conversion does not use specified blocksize >>> x = torch.tensor([[[1], [0]], [[0], [0]], [[2], [3]]]) >>> x.to_sparse(layout=torch.sparse_csr, dense_dim=1) tensor(crow_indices=tensor([0, 1, 1, 3]), col_indices=tensor([0, 0, 1]), values=tensor([[1], [2], [3]]), size=(3, 2, 1), nnz=3, layout=torch.sparse_csr)
- to_sparse_bsc(blocksize, dense_dim) Tensor
Convert a tensor to a block sparse column (BSC) storage format of given blocksize. If the
selfis strided, then the number of dense dimensions could be specified, and a hybrid BSC tensor will be created, with dense_dim dense dimensions and self.dim() - 2 - dense_dim batch dimension.- Parameters:
blocksize (list, tuple,
torch.Size, optional) – Block size of the resulting BSC tensor. A block size must be a tuple of length two such that its items evenly divide the two sparse dimensions.dense_dim (int, optional) – Number of dense dimensions of the resulting BSC tensor. This argument should be used only if
selfis a strided tensor, and must be a value between 0 and dimension ofselftensor minus two.
Example:
>>> dense = torch.randn(10, 10) >>> sparse = dense.to_sparse_csr() >>> sparse_bsc = sparse.to_sparse_bsc((5, 5)) >>> sparse_bsc.row_indices() tensor([0, 1, 0, 1]) >>> dense = torch.zeros(4, 3, 1) >>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1 >>> dense.to_sparse_bsc((2, 1), 1) tensor(ccol_indices=tensor([0, 1, 2, 3]), row_indices=tensor([0, 1, 0]), values=tensor([[[[1.]], [[1.]]], [[[1.]], [[1.]]], [[[1.]], [[1.]]]]), size=(4, 3, 1), nnz=3, layout=torch.sparse_bsc)
- to_sparse_bsr(blocksize, dense_dim) Tensor
Convert a tensor to a block sparse row (BSR) storage format of given blocksize. If the
selfis strided, then the number of dense dimensions could be specified, and a hybrid BSR tensor will be created, with dense_dim dense dimensions and self.dim() - 2 - dense_dim batch dimension.- Parameters:
blocksize (list, tuple,
torch.Size, optional) – Block size of the resulting BSR tensor. A block size must be a tuple of length two such that its items evenly divide the two sparse dimensions.dense_dim (int, optional) – Number of dense dimensions of the resulting BSR tensor. This argument should be used only if
selfis a strided tensor, and must be a value between 0 and dimension ofselftensor minus two.
Example:
>>> dense = torch.randn(10, 10) >>> sparse = dense.to_sparse_csr() >>> sparse_bsr = sparse.to_sparse_bsr((5, 5)) >>> sparse_bsr.col_indices() tensor([0, 1, 0, 1]) >>> dense = torch.zeros(4, 3, 1) >>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1 >>> dense.to_sparse_bsr((2, 1), 1) tensor(crow_indices=tensor([0, 2, 3]), col_indices=tensor([0, 2, 1]), values=tensor([[[[1.]], [[1.]]], [[[1.]], [[1.]]], [[[1.]], [[1.]]]]), size=(4, 3, 1), nnz=3, layout=torch.sparse_bsr)
- to_sparse_coo()[source]
Convert a tensor to coordinate format.
Examples:
>>> dense = torch.randn(5, 5) >>> sparse = dense.to_sparse_coo() >>> sparse._nnz() 25
- to_sparse_csc() Tensor
Convert a tensor to compressed column storage (CSC) format. Except for strided tensors, only works with 2D tensors. If the
selfis strided, then the number of dense dimensions could be specified, and a hybrid CSC tensor will be created, with dense_dim dense dimensions and self.dim() - 2 - dense_dim batch dimension.- Parameters:
dense_dim (int, optional) – Number of dense dimensions of the resulting CSC tensor. This argument should be used only if
selfis a strided tensor, and must be a value between 0 and dimension ofselftensor minus two.
Example:
>>> dense = torch.randn(5, 5) >>> sparse = dense.to_sparse_csc() >>> sparse._nnz() 25 >>> dense = torch.zeros(3, 3, 1, 1) >>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1 >>> dense.to_sparse_csc(dense_dim=2) tensor(ccol_indices=tensor([0, 1, 2, 3]), row_indices=tensor([0, 2, 1]), values=tensor([[[1.]], [[1.]], [[1.]]]), size=(3, 3, 1, 1), nnz=3, layout=torch.sparse_csc)
- to_sparse_csr(dense_dim=None) Tensor
Convert a tensor to compressed row storage format (CSR). Except for strided tensors, only works with 2D tensors. If the
selfis strided, then the number of dense dimensions could be specified, and a hybrid CSR tensor will be created, with dense_dim dense dimensions and self.dim() - 2 - dense_dim batch dimension.- Parameters:
dense_dim (int, optional) – Number of dense dimensions of the resulting CSR tensor. This argument should be used only if
selfis a strided tensor, and must be a value between 0 and dimension ofselftensor minus two.
Example:
>>> dense = torch.randn(5, 5) >>> sparse = dense.to_sparse_csr() >>> sparse._nnz() 25 >>> dense = torch.zeros(3, 3, 1, 1) >>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1 >>> dense.to_sparse_csr(dense_dim=2) tensor(crow_indices=tensor([0, 1, 2, 3]), col_indices=tensor([0, 2, 1]), values=tensor([[[1.]], [[1.]], [[1.]]]), size=(3, 3, 1, 1), nnz=3, layout=torch.sparse_csr)
- tolist() list or number
Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with
item(). Tensors are automatically moved to the CPU first if necessary.This operation is not differentiable.
Examples:
>>> a = torch.randn(2, 2) >>> a.tolist() [[0.012766935862600803, 0.5415473580360413], [-0.08909505605697632, 0.7729271650314331]] >>> a[0,0].tolist() 0.012766935862600803
- topk(k, dim=None, largest=True, sorted=True)
See
torch.topk()
- trace() Tensor
See
torch.trace()
- transpose_(dim0, dim1) Tensor
In-place version of
transpose()
- triangular_solve(A, upper=True, transpose=False, unitriangular=False)
See
torch.triangular_solve()
- tril(diagonal=0) Tensor
See
torch.tril()
- triu(diagonal=0) Tensor
See
torch.triu()
- true_divide_(value) Tensor
In-place version of
true_divide()
- trunc() Tensor
See
torch.trunc()
- type(dtype=None, non_blocking=False, **kwargs) str or Tensor
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
- Parameters:
dtype (dtype or string) – The desired type
non_blocking (bool) – If
True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.**kwargs – For compatibility, may contain the key
asyncin place of thenon_blockingargument. Theasyncarg is deprecated.
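A small sketch (CPU float tensor; the type string is device- and dtype-dependent):
>>> t = torch.ones(2)
>>> t.type()
'torch.FloatTensor'
>>> t.type(torch.int32)
tensor([1, 1], dtype=torch.int32)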
- type_as(tensor) Tensor
Returns this tensor cast to the type of the given tensor.
This is a no-op if the tensor is already of the correct type. This is equivalent to
self.type(tensor.type())- Parameters:
tensor (Tensor) – the tensor which has the desired type
- unbind(dim=0) seq
See
torch.unbind()
- unflatten(dim, sizes) Tensor[source]
See
torch.unflatten().
- unfold(dimension, size, step) Tensor
Returns a view of the original tensor which contains all slices of size
sizefromselftensor in the dimensiondimension.Step between two slices is given by
step.If sizedim is the size of dimension
dimensionforself, the size of dimensiondimensionin the returned tensor will be (sizedim - size) / step + 1.An additional dimension of size
sizeis appended in the returned tensor.- Parameters:
Example:
>>> x = torch.arange(1., 8) >>> x tensor([ 1., 2., 3., 4., 5., 6., 7.]) >>> x.unfold(0, 2, 1) tensor([[ 1., 2.], [ 2., 3.], [ 3., 4.], [ 4., 5.], [ 5., 6.], [ 6., 7.]]) >>> x.unfold(0, 2, 2) tensor([[ 1., 2.], [ 3., 4.], [ 5., 6.]])
- uniform_(from=0, to=1, *, generator=None) Tensor
Fills
selftensor with numbers sampled from the continuous uniform distribution:\[f(x) = \dfrac{1}{\text{to} - \text{from}}\]
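For example (the sampled values shown are illustrative and will differ run to run):
>>> t = torch.empty(3)
>>> t.uniform_(0, 1)
tensor([0.1863, 0.9206, 0.4518])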
- unique(sorted=True, return_inverse=False, return_counts=False, dim=None)[source]
Returns the unique elements of the input tensor.
See
torch.unique()
- unique_consecutive(return_inverse=False, return_counts=False, dim=None)[source]
Eliminates all but the first element from every consecutive group of equivalent elements.
See
torch.unique_consecutive()
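A short sketch (only adjacent duplicates are removed, unlike unique()):
>>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
>>> x.unique_consecutive()
tensor([1, 2, 3, 1, 2])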
- unsafe_chunk(chunks, dim=0) List of Tensors
See
torch.unsafe_chunk()
- unsafe_split(split_size, dim=0) List of Tensors
See
torch.unsafe_split()
- unsqueeze_(dim) Tensor
In-place version of
unsqueeze()
- untyped_storage() torch.UntypedStorage
Returns the underlying
UntypedStorage.
- values() Tensor
Return the values tensor of a sparse COO tensor.
Warning
Throws an error if
selfis not a sparse COO tensor.See also
Tensor.indices().Note
This method can only be called on a coalesced sparse tensor. See
Tensor.coalesce()for details.
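A minimal sketch (the tensor is coalesced before reading its values):
>>> s = torch.tensor([[0, 9], [0, 0]]).to_sparse().coalesce()
>>> s.values()
tensor([9])
>>> s.indices()
tensor([[0],
        [1]])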
- var(dim=None, *, correction=1, keepdim=False) Tensor
See
torch.var()
- vdot(other) Tensor
See
torch.vdot()
- view(*shape) Tensor
Returns a new tensor with the same data as the
selftensor but of a differentshape.The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions \(d, d+1, \dots, d+k\) that satisfy the following contiguity-like condition that \(\forall i = d, \dots, d+k-1\),
\[\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]\]Otherwise, it will not be possible to view
selftensor asshapewithout copying it (e.g., viacontiguous()). When it is unclear whether aview()can be performed, it is advisable to usereshape(), which returns a view if the shapes are compatible, and copies (equivalent to callingcontiguous()) otherwise.- Parameters:
shape (torch.Size or int...) – the desired size
Example:
>>> x = torch.randn(4, 4) >>> x.size() torch.Size([4, 4]) >>> y = x.view(16) >>> y.size() torch.Size([16]) >>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions >>> z.size() torch.Size([2, 8]) >>> a = torch.randn(1, 2, 3, 4) >>> a.size() torch.Size([1, 2, 3, 4]) >>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension >>> b.size() torch.Size([1, 3, 2, 4]) >>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory >>> c.size() torch.Size([1, 3, 2, 4]) >>> torch.equal(b, c) False
- view(dtype) Tensor
Returns a new tensor with the same data as the
selftensor but of a differentdtype.If the element size of
dtypeis different than that ofself.dtype, then the size of the last dimension of the output will be scaled proportionally. For instance, ifdtypeelement size is twice that ofself.dtype, then each pair of elements in the last dimension ofselfwill be combined, and the size of the last dimension of the output will be half that ofself. Ifdtypeelement size is half that ofself.dtype, then each element in the last dimension ofselfwill be split in two, and the size of the last dimension of the output will be double that ofself. For this to be possible, the following conditions must be true:self.dim()must be greater than 0.self.stride(-1)must be 1.
Additionally, if the element size of
dtypeis greater than that ofself.dtype, the following conditions must be true as well:self.size(-1)must be divisible by the ratio between the element sizes of the dtypes.self.storage_offset()must be divisible by the ratio between the element sizes of the dtypes.The strides of all dimensions, except the last dimension, must be divisible by the ratio between the element sizes of the dtypes.
If any of the above conditions are not met, an error is thrown.
Warning
This overload is not supported by TorchScript, and using it in a Torchscript program will cause undefined behavior.
- Parameters:
dtype (
torch.dtype) – the desired dtype
Example:
>>> x = torch.randn(4, 4) >>> x tensor([[ 0.9482, -0.0310, 1.4999, -0.5316], [-0.1520, 0.7472, 0.5617, -0.8649], [-2.4724, -0.0334, -0.2976, -0.8499], [-0.2109, 1.9913, -0.9607, -0.6123]]) >>> x.dtype torch.float32 >>> y = x.view(torch.int32) >>> y tensor([[ 1064483442, -1124191867, 1069546515, -1089989247], [-1105482831, 1061112040, 1057999968, -1084397505], [-1071760287, -1123489973, -1097310419, -1084649136], [-1101533110, 1073668768, -1082790149, -1088634448]], dtype=torch.int32) >>> y[0, 0] = 1000000000 >>> x tensor([[ 0.0047, -0.0310, 1.4999, -0.5316], [-0.1520, 0.7472, 0.5617, -0.8649], [-2.4724, -0.0334, -0.2976, -0.8499], [-0.2109, 1.9913, -0.9607, -0.6123]]) >>> x.view(torch.cfloat) tensor([[ 0.0047-0.0310j, 1.4999-0.5316j], [-0.1520+0.7472j, 0.5617-0.8649j], [-2.4724-0.0334j, -0.2976-0.8499j], [-0.2109+1.9913j, -0.9607-0.6123j]]) >>> x.view(torch.cfloat).size() torch.Size([4, 2]) >>> x.view(torch.uint8) tensor([[ 0, 202, 154, 59, 182, 243, 253, 188, 185, 252, 191, 63, 240, 22, 8, 191], [227, 165, 27, 190, 128, 72, 63, 63, 146, 203, 15, 63, 22, 106, 93, 191], [205, 59, 30, 192, 112, 206, 8, 189, 7, 95, 152, 190, 12, 147, 89, 191], [ 43, 246, 87, 190, 235, 226, 254, 63, 111, 240, 117, 191, 177, 191, 28, 191]], dtype=torch.uint8) >>> x.view(torch.uint8).size() torch.Size([4, 16])
- view_as(other) Tensor
View this tensor as the same size as
other.self.view_as(other)is equivalent toself.view(other.size()).Please see
view()for more information aboutview.- Parameters:
other (
torch.Tensor) – The result tensor has the same size asother.
- vsplit(split_size_or_sections) List of Tensors
See
torch.vsplit()
- where(condition, y) Tensor
self.where(condition, y)is equivalent totorch.where(condition, self, y). Seetorch.where()
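A short sketch (illustrative values):
>>> x = torch.tensor([1., -2., 3.])
>>> y = torch.zeros(3)
>>> x.where(x > 0, y)
tensor([1., 0., 3.])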
- xlogy(other) Tensor
See
torch.xlogy()
- xpu(device=None, non_blocking=False, memory_format=torch.preserve_format) Tensor
Returns a copy of this object in XPU memory.
If this object is already in XPU memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters:
device (
torch.device) – The destination XPU device. Defaults to the current XPU device.non_blocking (bool) – If
Trueand the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default:False.memory_format (
torch.memory_format, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format.