Upsampling and the various inverse ("de-") operations in PyTorch, explained in detail
import torch.nn.functional as F
import torch.nn as nn
F.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)
r"""Upsamples the input to either the given :attr:`size` or the given
:attr:`scale_factor`

The algorithm used for upsampling is determined by :attr:`mode`.

Currently temporal, spatial and volumetric upsampling are supported, i.e.
expected inputs are 3-D, 4-D or 5-D in shape.

The input dimensions are interpreted in the form:
`mini-batch x channels x [optional depth] x [optional height] x width`.

The modes available for upsampling are: `nearest`, `linear` (3D-only),
`bilinear` (4D-only), `trilinear` (5D-only)

Args:
    input (Tensor): the input tensor
    size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]):
        output spatial size.
    scale_factor (int): multiplier for spatial size. Has to be an integer.
    mode (string): algorithm used for upsampling:
        'nearest' | 'linear' | 'bilinear' | 'trilinear'. Default: 'nearest'
    align_corners (bool, optional): if True, the corner pixels of the input
        and output tensors are aligned, and thus preserving the values at
        those pixels. This only has effect when :attr:`mode` is `linear`,
        `bilinear`, or `trilinear`. Default: False

.. warning::
    With ``align_corners = True``, the linearly interpolating modes
    (`linear`, `bilinear`, and `trilinear`) don't proportionally align the
    output and input pixels, and thus the output values can depend on the
    input size. This was the default behavior for these modes up to version
    0.3.1. Since then, the default behavior is ``align_corners = False``.
    See :class:`~torch.nn.Upsample` for concrete examples on how this
    affects the outputs.
"""
nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
""" Parameters: in_channels (int) – Number of channels in the input image out_channels (int) – Number of channels produced by the convolution kernel_size (int or tuple) – Size of the convolving kernel stride (int or tuple, optional) – Stride of the convolution. Default: 1 padding (int or tuple, optional) – kernel_size - 1 - padding zero-padding will be added to both sides of each dimension in the input. Default: 0 output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0 groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1 bias (bool, optional) – If True, adds a learnable bias to the output. Default: True dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 """
Output size calculation (per spatial dimension):

H_out = (H_in - 1) * stride[0] - 2 * padding[0] + dilation[0] * (kernel_size[0] - 1) + output_padding[0] + 1
W_out = (W_in - 1) * stride[1] - 2 * padding[1] + dilation[1] * (kernel_size[1] - 1) + output_padding[1] + 1
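A short shape check of that formula (the channel counts and input size are arbitrary, chosen only to make the arithmetic concrete):

import torch
import torch.nn as nn

# with stride=2, padding=1, output_padding=1 this exactly doubles H and W
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=8, kernel_size=3,
                            stride=2, padding=1, output_padding=1)

x = torch.randn(1, 16, 32, 32)
y = deconv(x)
# H_out = (32 - 1) * 2 - 2 * 1 + 1 * (3 - 1) + 1 + 1 = 64
print(y.shape)   # torch.Size([1, 8, 64, 64])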

Definition: nn.MaxUnpool2d(kernel_size, stride=None, padding=0)
Forward call:
def forward(self, input, indices, output_size=None):
    return F.max_unpool2d(input, indices, self.kernel_size, self.stride,
                          self.padding, output_size)
r"""Computes a partial inverse of :class:`MaxPool2d`.
:class:`MaxPool2d` is not fully invertible, since the non-maximal values are lost.
:class:`MaxUnpool2d` takes in as input the output of :class:`MaxPool2d`
including the indices of the maximal values and computes a partial inverse
in which all non-maximal values are set to zero.
.. note:: `MaxPool2d` can map several input sizes to the same output sizes.
Hence, the inversion process can get ambiguous.
To accommodate this, you can provide the needed output size
as an additional argument `output_size` in the forward call.
See the Inputs and Example below.
Args:
kernel_size (int or tuple): Size of the max pooling window.
stride (int or tuple): Stride of the max pooling window.
It is set to ``kernel_size`` by default.
padding (int or tuple): Padding that was added to the input
Inputs:
- `input`: the input Tensor to invert
- `indices`: the indices given out by `MaxPool2d`
- `output_size` (optional) : a `torch.Size` that specifies the targeted output size
Shape:
- Input: :math:`(N, C, H_{in}, W_{in})`
- Output: :math:`(N, C, H_{out}, W_{out})` where
計算公式:見下面
Example: 見下面
"""

F.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)
Usage is the same as the module form described above.
def max_unpool2d(input, indices, kernel_size, stride=None, padding=0,
                 output_size=None):
    r"""Computes a partial inverse of :class:`MaxPool2d`.

    See :class:`~torch.nn.MaxUnpool2d` for details.
    """
    pass
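A minimal sketch of the functional form, paired with F.max_pool2d(..., return_indices=True):

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
pooled, indices = F.max_pool2d(x, kernel_size=2, stride=2, return_indices=True)

# same semantics as the module form, but kernel_size/stride are passed explicitly
restored = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2)
print(restored.shape)   # torch.Size([1, 1, 4, 4])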
That is the full content of this walkthrough of upsampling and the various inverse operations in PyTorch. Hopefully it gives you a useful reference, and thank you for supporting 腳本之家.