Upsampling using torch.nn.Upsample

EXAMPLE 1:

Syntax of the torch.nn.Upsample method:

torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)

Parameters:
size: The target output spatial size. It can be a tuple such as (height, width) or a single integer for a square output.
scale_factor: The multiplier for the input size/resolution. It can be a tuple such as (h_scale, w_scale) or a single float. Only one of size or scale_factor should be given.
mode: The upsampling algorithm to use: nearest, linear, bilinear, bicubic, trilinear, or area. The default is nearest.
align_corners: Whether the corner pixels of the input and output tensors should be aligned. It only has an effect with the linear, bilinear, bicubic, and trilinear modes. The default is None (treated as False).
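
Only one of size and scale_factor should be passed in a given call. As a minimal sketch (the 2x3 input and the target resolution below are arbitrary choices), the two modules below request the same 4x6 output, once explicitly via size and once as a multiple of the input via scale_factor:

Python3

# import the necessary library
import torch

# a 4D input of shape (batch, channels, height, width) = (1, 1, 2, 3)
X = torch.arange(6.).view(1, 1, 2, 3)

# request the output resolution explicitly ...
up_by_size = torch.nn.Upsample(size=(4, 6))

# ... or as a multiple of the input resolution
up_by_scale = torch.nn.Upsample(scale_factor=2)

print(up_by_size(X).shape)   # expected: torch.Size([1, 1, 4, 6])
print(up_by_scale(X).shape)  # expected: torch.Size([1, 1, 4, 6])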

torch.nn.Upsample is a module in PyTorch that upsamples the input tensor by the given scale factor (or to the given output size). Here’s an example of using torch.nn.Upsample to upscale a tensor of shape (1, 1, 2).

Python3

# import the necessary library
import torch
  
# define a tensor and view as a 3D tensor
x = torch.tensor([1., 2.])
X = x.view(1,1,2)
print("Input Tensor Shape:",X.size())
print("Input Tensor:",X)
  
# Upsample with scale_factor 2 and mode = nearest
upsample1 = torch.nn.Upsample(scale_factor=2)
output1 = upsample1(X)
print(upsample1,'-->>', output1)
  
# Upsample with scale_factor 3 and mode = nearest
upsample2 = torch.nn.Upsample(scale_factor=3)
output2 = upsample2(X)
print(upsample2,'-->>', output2)
  
# Upsample with scale_factor 2 and mode = linear
upsample3 = torch.nn.Upsample(scale_factor=2, mode='linear')
output3 = upsample3(X)
print(upsample3,' -->>', output3)


Output:

Input Tensor Shape: torch.Size([1, 1, 2])
Input Tensor: tensor([[[1., 2.]]])
Upsample(scale_factor=2.0, mode=nearest) -->> tensor([[[1., 1., 2., 2.]]])
Upsample(scale_factor=3.0, mode=nearest) -->> tensor([[[1., 1., 1., 2., 2., 2.]]])
Upsample(scale_factor=2.0, mode=linear)  -->> tensor([[[1.0000, 1.2500, 1.7500, 2.0000]]])
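
The align_corners flag only affects the interpolating modes (linear, bilinear, bicubic, trilinear). As a minimal sketch, reusing the same (1, 1, 2) input as above, setting align_corners=True maps the corner input samples exactly onto the corner output samples, which changes the interpolated values relative to the default output shown above:

Python3

# import the necessary library
import torch

# the same (1, 1, 2) input tensor as in the example above
X = torch.tensor([1., 2.]).view(1, 1, 2)

# Upsample with scale_factor 2, mode = linear and align_corners = True
upsample4 = torch.nn.Upsample(scale_factor=2, mode='linear', align_corners=True)
print(upsample4(X))
# expected: tensor([[[1.0000, 1.3333, 1.6667, 2.0000]]])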

EXAMPLE 2:

Here’s an example of using torch.nn.Upsample to upscale a 4D tensor of shape (1, 1, 2, 3) with several different interpolation modes.

Python3

# import the necessary library
import torch
  
# define a tensor and view it as a 4D tensor
x = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.],
                 ])
X = x.view(1,1,2,3)
print("Input Tensor Shape:",X.size())
print("Input Tensor:\n",X)
  
# Upsample with scale_factor 2 and mode = nearest
upsample1 = torch.nn.Upsample(scale_factor=2)
output1 = upsample1(X)
print(upsample1,'\n', output1.shape,'\n', output1)
  
# Upsample with scale_factor 2 and mode = bilinear
upsample2 = torch.nn.Upsample(scale_factor=2, mode='bilinear')
output2 = upsample2(X)
print(upsample2,'\n', output2.shape,'\n', output2)
  
# Upsample with scale_factor 2 and mode = bicubic
upsample3 = torch.nn.Upsample(scale_factor=2, mode='bicubic')
output3 = upsample3(X)
print(upsample3,'\n', output3.shape,'\n', output3)
  
# Upsample with scale_factor 3 and mode = area
upsample4 = torch.nn.Upsample(scale_factor=3, mode='area')
output4 = upsample4(X)
print(upsample4,'\n', output4.shape,'\n', output4)


Output:

Input Tensor Shape: torch.Size([1, 1, 2, 3])
Input Tensor:
 tensor([[[[1., 2., 3.],
          [4., 5., 6.]]]])
Upsample(scale_factor=2.0, mode=nearest) 
 torch.Size([1, 1, 4, 6]) 
 tensor([[[[1., 1., 2., 2., 3., 3.],
          [1., 1., 2., 2., 3., 3.],
          [4., 4., 5., 5., 6., 6.],
          [4., 4., 5., 5., 6., 6.]]]])
Upsample(scale_factor=2.0, mode=bilinear) 
 torch.Size([1, 1, 4, 6]) 
 tensor([[[[1.0000, 1.2500, 1.7500, 2.2500, 2.7500, 3.0000],
          [1.7500, 2.0000, 2.5000, 3.0000, 3.5000, 3.7500],
          [3.2500, 3.5000, 4.0000, 4.5000, 5.0000, 5.2500],
          [4.0000, 4.2500, 4.7500, 5.2500, 5.7500, 6.0000]]]])
Upsample(scale_factor=2.0, mode=bicubic) 
 torch.Size([1, 1, 4, 6]) 
 tensor([[[[0.5781, 0.8750, 1.3516, 2.0156, 2.4922, 2.7891],
          [1.5742, 1.8711, 2.3477, 3.0117, 3.4883, 3.7852],
          [3.2148, 3.5117, 3.9883, 4.6523, 5.1289, 5.4258],
          [4.2109, 4.5078, 4.9844, 5.6484, 6.1250, 6.4219]]]])
Upsample(scale_factor=3.0, mode=area) 
 torch.Size([1, 1, 6, 9]) 
 tensor([[[[1., 1., 1., 2., 2., 2., 3., 3., 3.],
          [1., 1., 1., 2., 2., 2., 3., 3., 3.],
          [1., 1., 1., 2., 2., 2., 3., 3., 3.],
          [4., 4., 4., 5., 5., 5., 6., 6., 6.],
          [4., 4., 4., 5., 5., 5., 6., 6., 6.],
          [4., 4., 4., 5., 5., 5., 6., 6., 6.]]]])
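
The trilinear mode listed in the parameters section expects volumetric input, i.e. a 5D tensor of shape (batch, channels, depth, height, width). A minimal sketch with an arbitrarily chosen 2x2x2 volume:

Python3

# import the necessary library
import torch

# a 5D volumetric input of shape (batch, channels, depth, height, width)
V = torch.arange(8.).view(1, 1, 2, 2, 2)

# Upsample with scale_factor 2 and mode = trilinear
upsample5 = torch.nn.Upsample(scale_factor=2, mode='trilinear')
print(upsample5(V).shape)
# expected: torch.Size([1, 1, 4, 4, 4])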

How to Upsample a PyTorch Tensor?

As the amount of data generated by modern sensors and simulations continues to grow, it’s becoming increasingly common for datasets to include multiple channels representing different properties or dimensions. However, in some cases, these channels may be at a lower resolution or spatial/temporal scale than desired for downstream processing or analysis.

Upsampling is a digital signal processing technique used to increase the sample rate of a signal. It involves inserting additional samples between the existing samples, thereby increasing the signal’s resolution; no new information is created, but the denser representation is often more convenient for downstream processing. In classical DSP upsampling, new samples are first inserted at regular intervals between the existing ones, and the result is then passed through a low-pass (interpolation) filter to smooth out the artifacts introduced by the insertion.

Multi-channel refers to a signal that has multiple independent channels of information. For example, a stereo audio signal has two channels: a left channel and a right channel. Each channel carries independent information, such as the sound of a guitar on the left channel and the sound of a drum on the right channel. Multi-channel signals are commonly used in audio and video processing applications. In signal processing, multi-channel signals can be processed independently, or they can be combined to create a single output signal. In this article, we’ll explore how to use PyTorch to upsample a given multi-channel dataset using a variety of techniques.
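
As a minimal sketch of what multi-channel means in tensor terms (the two-channel, stereo-like signal below is made up purely for illustration), a single Upsample call processes every channel independently:

Python3

# import the necessary library
import torch

# a 1D signal with 2 independent channels: shape (batch, channels, length) = (1, 2, 4)
stereo = torch.tensor([[[1., 2., 3., 4.],        # e.g. left channel
                        [10., 20., 30., 40.]]])  # e.g. right channel

# double the temporal resolution of both channels in one call
upsample = torch.nn.Upsample(scale_factor=2, mode='linear')
print(upsample(stereo).shape)
# expected: torch.Size([1, 2, 8])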

  • Temporal data refers to data that changes over time, such as a time series of sensor measurements or a sequence of video frames.
  • Spatial data refers to data that has spatial dimensions, such as an image or a 2D heatmap.
  • Volumetric data refers to data that has both spatial dimensions and depth, such as a 3D medical image or a 3D point cloud.
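
In PyTorch these three cases correspond to 3D, 4D, and 5D input tensors, and the interpolating modes are dimension-specific (linear for temporal, bilinear/bicubic for spatial, trilinear for volumetric), while nearest and area work for all three. A rough sketch with placeholder sizes:

Python3

import torch

# temporal: (batch, channels, length) with mode = linear
print(torch.nn.Upsample(scale_factor=2, mode='linear')(torch.randn(1, 2, 16)).shape)        # (1, 2, 32)

# spatial: (batch, channels, height, width) with mode = bilinear
print(torch.nn.Upsample(scale_factor=2, mode='bilinear')(torch.randn(1, 3, 8, 8)).shape)    # (1, 3, 16, 16)

# volumetric: (batch, channels, depth, height, width) with mode = trilinear
print(torch.nn.Upsample(scale_factor=2, mode='trilinear')(torch.randn(1, 1, 4, 4, 4)).shape)  # (1, 1, 8, 8, 8)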

Before we dive into the code, let’s briefly review the basic concepts behind upsampling. At a high level, upsampling involves taking a low-resolution input and producing a higher-resolution output that captures more fine-grained details. There are many different ways to achieve this, but some common techniques include:

  • Bilinear interpolation: This involves computing a weighted average of the neighboring pixels in the input image to estimate the value of each new pixel in the output image.
  • Transposed convolution: This involves applying a set of learnable filters to the input image and then “unfolding” the output so that it covers a larger area than the input. It can be thought of as the inverse of a normal convolution operation (a minimal sketch follows below).
  • Nearest-neighbor interpolation: This involves simply copying the value of the nearest pixel in the input image to the corresponding pixel in the output image.

Now, let’s explore how to implement these techniques using PyTorch.
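
Among the techniques above, transposed convolution is the only learnable one; the others use fixed interpolation rules. A minimal sketch using torch.nn.ConvTranspose2d (the channel counts, kernel size, and stride below are arbitrary choices, and with randomly initialized weights the output values are not meaningful until the layer is trained):

Python3

# import the necessary library
import torch

# a 4D input of shape (batch, channels, height, width)
X = torch.randn(1, 1, 2, 3)

# kernel_size=2 with stride=2 doubles the spatial resolution: (2, 3) -> (4, 6)
upsample = torch.nn.ConvTranspose2d(in_channels=1, out_channels=1,
                                    kernel_size=2, stride=2)
print(upsample(X).shape)
# expected: torch.Size([1, 1, 4, 6])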
