
PyTorch up/downsampling function torch.nn.functional.interpolate

Contents
  • PyTorch up/downsampling function torch.nn.functional.interpolate (interpolation)
    • 1. upsample/downsample 3D tensor
    • 2. upsample/downsample 4D tensor
    • 3. upsample/downsample 5D tensor
  • Further reading

    PyTorch up/downsampling function torch.nn.functional.interpolate (interpolation)

    torch.nn.functional.interpolate(input_tensor, size=None, scale_factor=8, mode='bilinear', align_corners=False)
    '''
    Down/up samples the input to either the given size or the given scale_factor.
    The algorithm used for interpolation is determined by mode.
    Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.
    The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width.
    The modes available for resizing are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only), area.
    '''

    This function is used to upsample or downsample the spatial dimensions (h, w) of a tensor.

    input_tensor supports 3D (b, c, w) or (batch, seq_len, dim), 4D (b, c, h, w), and 5D (b, c, f, h, w) tensor shapes, where b is batch_size, c is channels, f is frames, h is height, and w is width.

    size is the target (w)/(h, w)/(f, h, w) shape of the output tensor; scale_factor is the multiplier applied to the spatial shape (w)/(h, w)/(f, h, w). Only one of size and scale_factor may be given, and whether the call upsamples or downsamples is determined by that value. If size or scale_factor is a list/sequence, its length must match the number of spatial dimensions of the input (see the short check after the list below):

    • If the input is 3D, the sequence length must be 1 (only the last dimension w is scaled).
    • If the input is 4D, the sequence length must be 2 (the last 2 dimensions h, w are scaled).
    • If the input is 5D, the sequence length must be 3 (the last 3 dimensions f, h, w are scaled).
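
    A quick sanity check of these rules (a minimal sketch, assuming a recent PyTorch version; the exact error messages may differ across releases):

    import torch
    import torch.nn.functional as F

    x4d = torch.randn(1, 3, 64, 64)                      # 4D input: (b, c, h, w)

    # A 2-element size matches the 2 spatial dims of a 4D input.
    y = F.interpolate(x4d, size=[32, 32], mode='nearest')
    print(y.shape)                                       # torch.Size([1, 3, 32, 32])

    # Passing both size and scale_factor is rejected.
    try:
        F.interpolate(x4d, size=[32, 32], scale_factor=0.5)
    except ValueError as e:
        print(e)

    # A 1-element size on a 4D input is rejected (the length must be 2).
    try:
        F.interpolate(x4d, size=[32], mode='nearest')
    except ValueError as e:
        print(e)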

    The available interpolation algorithms (mode) are: nearest (default), linear (3D-only), bilinear (4D-only), trilinear (5D-only), as well as bicubic (4D-only) and area.
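
    The two modes listed in the docstring that the examples below do not cover, bicubic and area, can be exercised the same way (a minimal sketch on a 4D input):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 64, 64)                        # 4D input: (b, c, h, w)

    # bicubic is 4D-only; area performs adaptive average pooling and works for 3D/4D/5D input.
    y_bicubic = F.interpolate(x, scale_factor=0.5, mode='bicubic', align_corners=False)
    y_area = F.interpolate(x, scale_factor=0.5, mode='area')
    print(y_bicubic.shape, y_area.shape)                 # torch.Size([1, 3, 32, 32]) torch.Size([1, 3, 32, 32])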

    align_corners (whether to align corner points): an optional bool. If align_corners=True, the corner pixels of the input and output are aligned, and the values at those corner pixels are preserved. It only has an effect for mode=linear, bilinear, trilinear. The default is False. The difference between align_corners=True and align_corners=False can be seen by upsampling from 4×4 to 8×8.

    One aligns on the centers of the four corner pixels, while the other aligns on the outer corners of the four corner pixels:
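
    The contrast can also be seen numerically on a tiny 1D example (a minimal sketch; the values in the comments are approximate):

    import torch
    import torch.nn.functional as F

    # Upsample a 4-sample signal to 8 samples with linear interpolation.
    x = torch.arange(4, dtype=torch.float32).reshape(1, 1, 4)   # values 0, 1, 2, 3

    y_true = F.interpolate(x, size=8, mode='linear', align_corners=True)
    y_false = F.interpolate(x, size=8, mode='linear', align_corners=False)

    print(y_true)
    # align_corners=True: input samples sit on the output corners,
    # roughly [0.00, 0.43, 0.86, 1.29, 1.71, 2.14, 2.57, 3.00]
    print(y_false)
    # align_corners=False: input samples are treated as pixel centers,
    # giving [0.00, 0.25, 0.75, 1.25, 1.75, 2.25, 2.75, 3.00]

    The examples in the sections below share the following setup: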

    import torch
    import torch.nn.functional as F

    b, c, f, h, w = 1, 3, 8, 64, 64

    1. upsample/downsample 3D tensor

    # interpolate 3D tensor
    x = torch.randn([b, c, w])

    # downsample to (b, c, w/2)
    y0 = F.interpolate(x, scale_factor=0.5, mode='nearest')
    y1 = F.interpolate(x, size=[w//2], mode='nearest')
    y2 = F.interpolate(x, scale_factor=0.5, mode='linear')   # only 3D
    y3 = F.interpolate(x, size=[w//2], mode='linear')        # only 3D
    print(y0.shape, y1.shape, y2.shape, y3.shape)
    # torch.Size([1, 3, 32]) torch.Size([1, 3, 32]) torch.Size([1, 3, 32]) torch.Size([1, 3, 32])

    # upsample to (b, c, w*2)
    y0 = F.interpolate(x, scale_factor=2, mode='nearest')
    y1 = F.interpolate(x, size=[w*2], mode='nearest')
    y2 = F.interpolate(x, scale_factor=2, mode='linear')     # only 3D
    y3 = F.interpolate(x, size=[w*2], mode='linear')         # only 3D
    print(y0.shape, y1.shape, y2.shape, y3.shape)
    # torch.Size([1, 3, 128]) torch.Size([1, 3, 128]) torch.Size([1, 3, 128]) torch.Size([1, 3, 128])

    2. upsample/downsample 4D tensor

    # interpolate 4D tensor
    x = torch.randn(b, c, h, w)

    # downsample to (b, c, h/2, w/2)
    y0 = F.interpolate(x, scale_factor=0.5, mode='nearest')
    y1 = F.interpolate(x, size=[h//2, w//2], mode='nearest')
    y2 = F.interpolate(x, scale_factor=0.5, mode='bilinear')   # only 4D
    y3 = F.interpolate(x, size=[h//2, w//2], mode='bilinear')  # only 4D
    print(y0.shape, y1.shape, y2.shape, y3.shape)
    # torch.Size([1, 3, 32, 32]) torch.Size([1, 3, 32, 32]) torch.Size([1, 3, 32, 32]) torch.Size([1, 3, 32, 32])

    # upsample to (b, c, h*2, w*2)
    y0 = F.interpolate(x, scale_factor=2, mode='nearest')
    y1 = F.interpolate(x, size=[h*2, w*2], mode='nearest')
    y2 = F.interpolate(x, scale_factor=2, mode='bilinear')     # only 4D
    y3 = F.interpolate(x, size=[h*2, w*2], mode='bilinear')    # only 4D
    print(y0.shape, y1.shape, y2.shape, y3.shape)
    # torch.Size([1, 3, 128, 128]) torch.Size([1, 3, 128, 128]) torch.Size([1, 3, 128, 128]) torch.Size([1, 3, 128, 128])

    3. upsample/downsample 5D tensor

    # interpolate 5D tensor
    x = torch.randn(b, c, f, h, w)

    # downsample to (b, c, f/2, h/2, w/2)
    y0 = F.interpolate(x, scale_factor=0.5, mode='nearest')
    y1 = F.interpolate(x, size=[f//2, h//2, w//2], mode='nearest')
    y2 = F.interpolate(x, scale_factor=0.5, mode='trilinear')        # only 5D
    y3 = F.interpolate(x, size=[f//2, h//2, w//2], mode='trilinear')  # only 5D
    print(y0.shape, y1.shape, y2.shape, y3.shape)
    # torch.Size([1, 3, 4, 32, 32]) torch.Size([1, 3, 4, 32, 32]) torch.Size([1, 3, 4, 32, 32]) torch.Size([1, 3, 4, 32, 32])

    # upsample to (b, c, f*2, h*2, w*2)
    y0 = F.interpolate(x, scale_factor=2, mode='nearest')
    y1 = F.interpolate(x, size=[f*2, h*2, w*2], mode='nearest')
    y2 = F.interpolate(x, scale_factor=2, mode='trilinear')          # only 5D
    y3 = F.interpolate(x, size=[f*2, h*2, w*2], mode='trilinear')    # only 5D
    print(y0.shape, y1.shape, y2.shape, y3.shape)
    # torch.Size([1, 3, 16, 128, 128]) torch.Size([1, 3, 16, 128, 128]) torch.Size([1, 3, 16, 128, 128]) torch.Size([1, 3, 16, 128, 128])
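
    For completeness, the same resizing is also available in module form: torch.nn.Upsample calls F.interpolate in its forward pass (a minimal sketch):

    import torch
    import torch.nn as nn

    b, c, f, h, w = 1, 3, 8, 64, 64
    x = torch.randn(b, c, f, h, w)

    # Module form of the trilinear upsampling shown above.
    up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)
    print(up(x).shape)   # torch.Size([1, 3, 16, 128, 128])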

    Further reading

    The above is based on my personal experience; I hope it can serve as a reference, and I hope everyone will continue to support 风君子博客.

    Articles you may also be interested in:

    • Detailed explanation of array sampling with the PyTorch up/downsampling function F.interpolate
    • Usage of the PyTorch up/downsampling function interpolate
    • Tips for accurately timing function execution in PyTorch
    • How to customize forward and backward functions in PyTorch
    • PyTorch basics: loss functions and backpropagation explained
    • Using PyTorch's torch.gather function