
Upsampling and inverse ("un-") operations in PyTorch, explained

2020-05-11 09:37 · 一只tobey · Python

This post walks through upsampling and the various inverse ("un-") operations in PyTorch. It is intended as a quick reference; hopefully it helps.

import torch.nn.functional as F
import torch.nn as nn

# Note: F.upsample is deprecated in recent PyTorch releases in favor of
# F.interpolate, which takes the same arguments.
F.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)

r"""Upsamples the input to either the given :attr:`size` or the given
:attr:`scale_factor`
The algorithm used for upsampling is determined by :attr:`mode`.
Currently temporal, spatial and volumetric upsampling are supported, i.e.
expected inputs are 3-D, 4-D or 5-D in shape.
The input dimensions are interpreted in the form:
`mini-batch x channels x [optional depth] x [optional height] x width`.
The modes available for upsampling are: `nearest`, `linear` (3D-only),
`bilinear` (4D-only), `trilinear` (5D-only)
Args:
  input (Tensor): the input tensor
  size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]):
    output spatial size.
  scale_factor (int): multiplier for spatial size. Has to be an integer.
  mode (string): algorithm used for upsampling:
    'nearest' | 'linear' | 'bilinear' | 'trilinear'. Default: 'nearest'
  align_corners (bool, optional): if True, the corner pixels of the input
    and output tensors are aligned, and thus preserving the values at
    those pixels. This only has effect when :attr:`mode` is `linear`,
    `bilinear`, or `trilinear`. Default: False
.. warning::
  With ``align_corners = True``, the linearly interpolating modes
  (`linear`, `bilinear`, and `trilinear`) don't proportionally align the
  output and input pixels, and thus the output values can depend on the
  input size. This was the default behavior for these modes up to version
  0.3.1. Since then, the default behavior is ``align_corners = False``.
  See :class:`~torch.nn.Upsample` for concrete examples on how this
  affects the outputs.
"""

nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)

"""
Parameters:
  in_channels (int) – Number of channels in the input image
  out_channels (int) – Number of channels produced by the convolution
  kernel_size (int or tuple) – Size of the convolving kernel
  stride (int or tuple, optional) – Stride of the convolution. Default: 1
  padding (int or tuple, optional) – kernel_size - 1 - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
  output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
  groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
  bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
  dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
"""

Output size computation (the original post shows this as an image; the formula below is the standard one from the PyTorch docs):

H_out = (H_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1

and likewise for W_out.
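A minimal sketch checking the output-size formula for ConvTranspose2d; the channel counts and spatial size are arbitrary illustrative values:

```python
import torch
import torch.nn as nn

# A common "learned 2x upsampling" configuration:
# kernel_size=4, stride=2, padding=1 exactly doubles H and W.
deconv = nn.ConvTranspose2d(in_channels=3, out_channels=8,
                            kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 3, 16, 16)
y = deconv(x)

# H_out = (16 - 1)*2 - 2*1 + 1*(4 - 1) + 0 + 1 = 32
print(y.shape)  # torch.Size([1, 8, 32, 32])
```

Unlike F.upsample, the transposed convolution has learnable weights, so the upsampling pattern is trained rather than fixed.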

Definition: nn.MaxUnpool2d(kernel_size, stride=None, padding=0)

Forward call (from the module source):

def forward(self, input, indices, output_size=None):
  return F.max_unpool2d(input, indices, self.kernel_size, self.stride,
             self.padding, output_size)
r"""Computes a partial inverse of :class:`MaxPool2d`.
:class:`MaxPool2d` is not fully invertible, since the non-maximal values are lost.
:class:`MaxUnpool2d` takes in as input the output of :class:`MaxPool2d`
including the indices of the maximal values and computes a partial inverse
in which all non-maximal values are set to zero.
.. note:: `MaxPool2d` can map several input sizes to the same output sizes.
     Hence, the inversion process can get ambiguous.
     To accommodate this, you can provide the needed output size
     as an additional argument `output_size` in the forward call.
     See the Inputs and Example below.
Args:
  kernel_size (int or tuple): Size of the max pooling window.
  stride (int or tuple): Stride of the max pooling window.
    It is set to ``kernel_size`` by default.
  padding (int or tuple): Padding that was added to the input
Inputs:
  - `input`: the input Tensor to invert
  - `indices`: the indices given out by `MaxPool2d`
  - `output_size` (optional) : a `torch.Size` that specifies the targeted output size
Shape:
  - Input: :math:`(N, C, H_{in}, W_{in})`
  - Output: :math:`(N, C, H_{out}, W_{out})` where
Shape formula: see below.
Example: see below.
"""

The shape formula (shown as an image in the original post; this is the standard one from the PyTorch docs):

H_out = (H_in - 1) * stride - 2 * padding + kernel_size

and likewise for W_out, unless output_size is given explicitly.

F.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)

The functional form is used exactly like the module form above.

def max_unpool2d(input, indices, kernel_size, stride=None, padding=0,
                 output_size=None):
  r"""Computes a partial inverse of :class:`MaxPool2d`.
  See :class:`~torch.nn.MaxUnpool2d` for details.
  """
  pass  # body elided in this excerpt
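A minimal sketch of the pool/unpool round trip. The key point is that MaxPool2d must be created with return_indices=True so the maxima locations can be fed back to MaxUnpool2d; the input values are arbitrary:

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.arange(1., 17.).view(1, 1, 4, 4)

# pooled keeps the max of each 2x2 window; indices records where it was.
pooled, indices = pool(x)

# The "partial inverse": maxima go back to their original positions,
# every non-maximal position becomes zero.
restored = unpool(pooled, indices)
print(restored.shape)  # torch.Size([1, 1, 4, 4])
```

Only the maximal values survive the round trip, which is exactly why the docstring calls this a partial inverse: the information at non-maximal positions is lost in pooling and cannot be recovered.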

That covers upsampling and the main inverse operations in PyTorch. Hopefully it serves as a useful reference.

Original link: https://blog.csdn.net/zz2230633069/article/details/83279626
