An Example of Image and Tensor Padding in PyTorch
In PyTorch, both images and Tensors can be padded, for example with constant, reflection (mirror), or replication padding. During image preprocessing, the border-padding mode for an image can be set as follows:
import torchvision.transforms as transforms

img_to_pad = transforms.Compose([
    transforms.Pad(padding=2, padding_mode='symmetric'),
    transforms.ToTensor(),
])
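For reference, a minimal usage sketch of the composed transform is given below; the file name example.jpg is only an illustrative placeholder, and the output shape assumes a 3-channel image. Each border of the PIL image is padded by 2 pixels in symmetric mode before the image is converted to a Tensor.

from PIL import Image

img = Image.open('example.jpg')   # PIL image of size W x H (placeholder path)
padded = img_to_pad(img)          # Tensor of shape (3, H + 4, W + 4) for an RGB image
print(padded.shape)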
A Tensor can be padded as follows:
import torch.nn.functional as F

# feature is a 2D Tensor; expand it to 4D so that the 'replicate' mode of F.pad can be used
feature = feature.unsqueeze(0).unsqueeze(0)
avg_feature = F.pad(feature, pad=[1, 1, 1, 1], mode='replicate')
Note that transforms.Pad can only pad images in PIL format, whereas F.pad pads Tensors. F.pad currently does not support non-constant padding of a 2D Tensor directly, so a 2D Tensor can first be expanded to a 4D Tensor with unsqueeze and then padded, as illustrated below.
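As a small self-contained sketch of this workaround (the tensor x below is purely illustrative), a 2D Tensor is expanded to 4D with unsqueeze, padded in 'replicate' mode, and then squeezed back to 2D:

import torch
import torch.nn.functional as F

x = torch.arange(9, dtype=torch.float32).reshape(3, 3)    # 2D Tensor

# 'replicate' padding is not implemented for 2D input, so expand to 4D first
x4d = x.unsqueeze(0).unsqueeze(0)                          # shape (1, 1, 3, 3)
padded = F.pad(x4d, pad=[1, 1, 1, 1], mode='replicate')    # shape (1, 1, 5, 5)
padded_2d = padded.squeeze(0).squeeze(0)                   # back to shape (5, 5)
print(padded_2d.shape)                                     # torch.Size([5, 5])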
Part of the source code of F.pad is shown below:
@torch._jit_internal.weak_script
def pad(input, pad, mode='constant', value=0):
    # type: (Tensor, List[int], str, float) -> Tensor
    r"""Pads tensor.

    Padding size:
        The number of dimensions to pad is :math:`\left\lfloor\frac{\text{len(pad)}}{2}\right\rfloor`
        and the dimensions that get padded begin with the last dimension and move forward.
        For example, to pad the last dimension of the input tensor, then `pad` has form
        `(padLeft, padRight)`; to pad the last 2 dimensions of the input tensor, then use
        `(padLeft, padRight, padTop, padBottom)`; to pad the last 3 dimensions, use
        `(padLeft, padRight, padTop, padBottom, padFront, padBack)`.

    Padding mode:
        See :class:`torch.nn.ConstantPad2d`, :class:`torch.nn.ReflectionPad2d`, and
        :class:`torch.nn.ReplicationPad2d` for concrete examples on how each of the
        padding modes works. Constant padding is implemented for arbitrary dimensions.
        Replicate padding is implemented for padding the last 3 dimensions of 5D input
        tensor, or the last 2 dimensions of 4D input tensor, or the last dimension of
        3D input tensor. Reflect padding is only implemented for padding the last 2
        dimensions of 4D input tensor, or the last dimension of 3D input tensor.

    .. include:: cuda_deterministic_backward.rst

    Args:
        input (Tensor): `Nd` tensor
        pad (tuple): m-elem tuple, where :math:`\frac{m}{2} \leq` input dimensions and :math:`m` is even.
        mode: 'constant', 'reflect' or 'replicate'. Default: 'constant'
        value: fill value for 'constant' padding. Default: 0

    Examples::

        >>> t4d = torch.empty(3, 3, 4, 2)
        >>> p1d = (1, 1)  # pad last dim by 1 on each side
        >>> out = F.pad(t4d, p1d, "constant", 0)  # effectively zero padding
        >>> print(out.data.size())
        torch.Size([3, 3, 4, 4])
        >>> p2d = (1, 1, 2, 2)  # pad last dim by (1, 1) and 2nd to last by (2, 2)
        >>> out = F.pad(t4d, p2d, "constant", 0)
        >>> print(out.data.size())
        torch.Size([3, 3, 8, 4])
        >>> t4d = torch.empty(3, 3, 4, 2)
        >>> p3d = (0, 1, 2, 1, 3, 3)  # pad by (0, 1), (2, 1), and (3, 3)
        >>> out = F.pad(t4d, p3d, "constant", 0)
        >>> print(out.data.size())
        torch.Size([3, 9, 7, 3])
    """
    assert len(pad) % 2 == 0, 'Padding length must be divisible by 2'
    assert len(pad) // 2 <= input.dim(), 'Padding length too large'
    if mode == 'constant':
        ret = _VF.constant_pad_nd(input, pad, value)
    else:
        assert value == 0, 'Padding mode "{}" doesn\'t take in value argument'.format(mode)
        if input.dim() == 3:
            assert len(pad) == 2, '3D tensors expect 2 values for padding'
            if mode == 'reflect':
                ret = torch._C._nn.reflection_pad1d(input, pad)
            elif mode == 'replicate':
                ret = torch._C._nn.replication_pad1d(input, pad)
            else:
                ret = input  # TODO: remove this when jit raise supports control flow
                raise NotImplementedError
        elif input.dim() == 4:
            assert len(pad) == 4, '4D tensors expect 4 values for padding'
            if mode == 'reflect':
                ret = torch._C._nn.reflection_pad2d(input, pad)
            elif mode == 'replicate':
                ret = torch._C._nn.replication_pad2d(input, pad)
            else:
                ret = input  # TODO: remove this when jit raise supports control flow
                raise NotImplementedError
        elif input.dim() == 5:
            assert len(pad) == 6, '5D tensors expect 6 values for padding'
            if mode == 'reflect':
                ret = input  # TODO: remove this when jit raise supports control flow
                raise NotImplementedError
            elif mode == 'replicate':
                ret = torch._C._nn.replication_pad3d(input, pad)
            else:
                ret = input  # TODO: remove this when jit raise supports control flow
                raise NotImplementedError
        else:
            ret = input  # TODO: remove this when jit raise supports control flow
            raise NotImplementedError("Only 3D, 4D, 5D padding with non-constant padding are supported for now")
    return ret
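As a quick illustration of the difference between the three modes described in the docstring above, the short sketch below (the tensor t is purely illustrative) pads the same small 4D Tensor in each mode: constant padding fills the border with the given value, reflect mirrors the values across the border without repeating the edge element, and replicate repeats the edge element.

import torch
import torch.nn.functional as F

t = torch.tensor([[[[1., 2., 3.],
                    [4., 5., 6.]]]])   # shape (1, 1, 2, 3)

print(F.pad(t, [1, 1, 1, 1], mode='constant', value=0))   # zeros around the border
print(F.pad(t, [1, 1, 1, 1], mode='reflect'))             # mirror across the border
print(F.pad(t, [1, 1, 1, 1], mode='replicate'))           # repeat the edge values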
This concludes this example of image and Tensor padding in PyTorch; I hope it serves as a useful reference, and I hope you will continue to support 腳本之家.