Tensors

The basic unit of PyTorch is the tensor. A tensor is an n-dimensional array (a generalization of a matrix) whose elements all share a single data type. Tensors hold a model's inputs and outputs and carry out the heavy numerical computation in deep learning models. A tensor is similar to a NumPy array, but it can also run on GPUs. Tensors can be created from Python lists, from NumPy arrays, or by initializing them with zeros, ones, or random values. Their elements can be accessed by index, just as in other programming languages, and ranges of elements can be read with slicing. Many mathematical operations can be performed on tensors. A small piece of code makes this concrete:

Python3

import torch

# Create a 3 x 2 tensor filled with ones
x = torch.ones((3, 2))
print(x)

# Create a tensor from a nested Python list
arr = [[3, 4]]
tensor = torch.Tensor(arr)
print(tensor)


Output:

tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
tensor([[3., 4.]])

In the code above, torch.ones builds a tensor filled with ones and torch.Tensor builds one from a Python list. The other creation routines, slicing, and mathematical operations mentioned earlier are sketched below.
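The following short sketch illustrates those remaining points: tensors initialized with zeros or random values, conversion to and from NumPy arrays, indexing and slicing, and elementwise math. The shapes and values are arbitrary examples chosen for the sketch, not anything required by PyTorch.

Python3

import torch
import numpy as np

# Tensors initialized to zeros or random values
zeros = torch.zeros((2, 3))
rand = torch.rand((2, 3))

# A tensor created from a NumPy array, and converted back
np_arr = np.array([[1, 2, 3], [4, 5, 6]])
t = torch.from_numpy(np_arr)
back = t.numpy()

# Indexing and slicing work much as they do in NumPy
print(t[0, 1])    # single element: tensor(2)
print(t[:, 1:])   # all rows, columns 1 onwards

# Elementwise mathematical operations
print(zeros + rand)
print(rand * 2)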

Difference between Tensor and Variable in PyTorch

In this article, we are going to see the difference between a tensor and a variable in PyTorch.

PyTorch is an open-source machine learning library used for computer vision, natural language processing, and deep neural networks. It is built on the Torch library and provides a core set of features for numerical computation, optimization, and deployment. PyTorch is built around the tensor class and was developed by Facebook AI researchers in 2016. Its two main strengths are that it works much like NumPy while also supporting GPUs, and that automatic differentiation makes building and training deep networks straightforward; models can also be deployed to mobile applications, which keeps PyTorch fast and easy to use. A few of its modules come up constantly: nn (for building neural networks), autograd (automatic differentiation of the operations performed on tensors), optim (optimizers that adjust network weights to minimize a loss), and utils (classes for data loading and processing).
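To make the roles of these modules concrete, here is a minimal, hypothetical training step. The model, data, and hyperparameters (a single nn.Linear layer, random inputs and targets, SGD with lr=0.01, batch size 4) are placeholders chosen only for illustration.

Python3

import torch
from torch import nn, optim
from torch.utils.data import TensorDataset, DataLoader

# nn: build a tiny model and a loss function
model = nn.Linear(in_features=3, out_features=1)
loss_fn = nn.MSELoss()

# optim: an optimizer that will adjust the model's weights
optimizer = optim.SGD(model.parameters(), lr=0.01)

# utils: wrap dummy data in a Dataset and a DataLoader
x = torch.rand((8, 3))
y = torch.rand((8, 1))
loader = DataLoader(TensorDataset(x, y), batch_size=4)

for batch_x, batch_y in loader:
    pred = model(batch_x)       # forward pass
    loss = loss_fn(pred, batch_y)
    optimizer.zero_grad()
    loss.backward()             # autograd computes the gradients
    optimizer.step()            # optim updates the weights to reduce the loss
    print(loss.item())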

Variables

A variable acts as a wrapper around a tensor and supports all of the operations that can be performed on a tensor. To support automatic differentiation of tensor gradients, autograd was combined with the variable. A variable has two parts: data, the raw tensor that the variable wraps, and grad, the gradient computed for that tensor. The basic use of a variable is to calculate the gradient of a tensor. A variable also records a reference to its creator function, so variables can be chained together to build a computational graph, in which each variable represents a node.
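A minimal sketch of these ideas using torch.autograd.Variable is shown below. Note that in recent PyTorch releases Variable is a deprecated thin wrapper, and a plain tensor created with requires_grad=True behaves the same way; the values here are arbitrary examples.

Python3

import torch
from torch.autograd import Variable

# Wrap a tensor in a Variable and ask autograd to track its gradient
x = Variable(torch.tensor([2.0, 3.0]), requires_grad=True)

# Build a small computational graph: y = sum(x^2)
y = (x ** 2).sum()
y.backward()        # autograd fills in x.grad

print(x.data)       # the raw tensor wrapped by the variable
print(x.grad)       # dy/dx = 2x -> tensor([4., 6.])
print(y.grad_fn)    # reference to the creator function of y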