
Converting add, sub, mean, and similar ops into convolutions


I. 1×1 Convolution

The main uses of a 1×1 convolution are:

  • Dimensionality reduction. For example, running a 500×500 input with depth 100 through 20 filters of 1×1 convolution produces an output of size 500×500×20 (see the sketch after this list).
  • Adding non-linearity. Since the convolution layer is followed by an activation layer, a 1×1 convolution applies a non-linear activation on top of the previous layer's learned representation, increasing the network's expressive power.
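
As a minimal sketch of the dimension-reduction point (sizes taken from the example above; the layer name reduce is illustrative, not from the original post):

import torch
import torch.nn as nn

# A batch of one feature map: 100 channels, 500x500 spatial size
x = torch.randn(1, 100, 500, 500)

# 20 filters of size 1x1 reduce the depth from 100 to 20
reduce = nn.Conv2d(in_channels=100, out_channels=20, kernel_size=1)

print(reduce(x).shape)  # torch.Size([1, 20, 500, 500]), i.e. 500x500x20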

II. Replacing sum with a 1×1 convolution

  • Constraint 1: the sum is taken over the channel dimension.
  • Constraint 2: the kernel size is 1×1 and the weights form an all-ones matrix.

1. torch.sum usage

The torch.sum() function computes the sum of the elements of a tensor.

torch.sum(input, dim=None, keepdim=False, dtype=None)
"""
input: the input tensor whose elements are to be summed.
dim: the dimension along which to sum. If omitted, all elements of the tensor are summed.
keepdim: defaults to False; if True, the reduced dimension is kept (with size 1) in the output.
dtype: the data type of the output tensor.
"""
import torch

# Create an example tensor
x = torch.tensor([
    [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12]],
    
    [[13, 14, 15, 16],
     [17, 18, 19, 20],
     [21, 22, 23, 24]]
])

# Sum along different dimensions
sum_dim0 = torch.sum(x, dim=0)
sum_dim1 = torch.sum(x, dim=1)
sum_dim2 = torch.sum(x, dim=2)

print("Original tensor:")
print(x)

print("\nSum along dimension 0:")
print(sum_dim0)

print("\nSum along dimension 1:")
print(sum_dim1)

print("\nSum along dimension 2:")
print(sum_dim2)
Original tensor:
shape: torch.Size([2, 3, 4])
tensor([[[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12]],

        [[13, 14, 15, 16],
         [17, 18, 19, 20],
         [21, 22, 23, 24]]])

Sum along dimension 0:
shape: torch.Size([3, 4])
tensor([[14, 16, 18, 20],
        [22, 24, 26, 28],
        [30, 32, 34, 36]])

Sum along dimension 1:
shape: torch.Size([2, 4])
tensor([[15, 18, 21, 24],
        [51, 54, 57, 60]])

Sum along dimension 2:
shape: torch.Size([2, 3])
tensor([[10, 26, 42],
        [58, 74, 90]])
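
The keepdim flag from the signature above matters whenever the reduced result must broadcast back against the input; a quick sketch:

import torch

x = torch.ones(2, 3, 4)
print(torch.sum(x, dim=1).shape)                # torch.Size([2, 4])
print(torch.sum(x, dim=1, keepdim=True).shape)  # torch.Size([2, 1, 4])

# With keepdim=True the sum broadcasts against x directly:
normalized = x / torch.sum(x, dim=1, keepdim=True)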

2. Replacing sum with a 1×1 convolution

import torch
import torch.nn.functional as F
# Define the input tensor: (N=1, C=376, H=1, W=376)
input_tensor = torch.randn(1, 376, 1, 376)
# Define the weight tensor: one output channel, an all-ones 1x1 kernel over 376 input channels
weight = torch.ones(1, 376, 1, 1, dtype=torch.float32)
# The 1x1 convolution with all-ones weights sums across the channel dimension
conv_result = F.conv2d(input_tensor, weight, stride=1, padding=0, groups=1)
print("Original input tensor shape:", input_tensor.shape)
print("Convolution result shape:", conv_result.shape)
print("Convolution result:", conv_result)
# Verify against torch.sum
sum_result = torch.sum(input_tensor, dim=1, keepdim=True)
print("Sum result:", sum_result)
print("Match:", torch.allclose(conv_result, sum_result, atol=1e-4))

III. Replacing mean with a convolution

  • Constraint 1: the mean is computed over the entire tensor, with no dim argument.
  • Constraint 2: the weight matrix is uniform, with every entry equal to 1/N, where N is the number of elements averaged (so the weighted sum equals the mean).

1. torch.mean usage

import torch

# Create an example tensor
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# Compute the mean of all elements
mean_all = torch.mean(x)

# Compute the mean along specified dimensions
mean_dim0 = torch.mean(x, dim=0)
mean_dim1 = torch.mean(x, dim=1)

# Print the tensor and its shape
print("Tensor x:")
print(x)
print("Shape of tensor x:", x.shape)

print("\nMean of all elements in x:")
print(mean_all)

print("\nMean along dimension 0:")
print(mean_dim0)
print("Shape of mean along dim 0:", mean_dim0.shape)

print("\nMean along dimension 1:")
print(mean_dim1)
print("Shape of mean along dim 1:", mean_dim1.shape)
Tensor x:
tensor([[1., 2.],
        [3., 4.]])
Shape of tensor x: torch.Size([2, 2])

Mean of all elements in x:
tensor(2.5000)

Mean along dimension 0:
tensor([2., 3.])
Shape of mean along dim 0: torch.Size([2])

Mean along dimension 1:
tensor([1.5000, 3.5000])
Shape of mean along dim 1: torch.Size([2])

2. Replacing mean with a convolution

import torch
import torch.nn.functional as F
# Define the input tensor: (N=1, C=1, H=6, W=20)
input_tensor = torch.randn(1, 1, 6, 20)
# Define the weight tensor: a 6x20 kernel with every entry 1/120 (120 = 6 * 20 elements)
weight = torch.ones(1, 1, 6, 20, dtype=torch.float32) / 120
# A convolution whose kernel spans the entire input computes the global average
conv_result = F.conv2d(input_tensor, weight, stride=1, padding=0, groups=1)
# Drop the trailing singleton dimension: (1, 1, 1, 1) -> (1, 1, 1)
conv_result = conv_result.squeeze(-1)
print("Original input tensor shape:", input_tensor.shape)
print("Convolution result shape:", conv_result.shape)
print("Convolution result:", conv_result)
# Verify against torch.mean
mean_result = torch.mean(input_tensor)
print("Mean result:", mean_result)
Original input tensor shape: torch.Size([1, 1, 6, 20])
Convolution result shape: torch.Size([1, 1, 1])
Convolution result: tensor([[[-0.0269]]])
Mean result: tensor(-0.0269)
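
The same idea generalizes: a kernel spanning the full C×H×W extent with every weight equal to 1/(C·H·W) reproduces the global mean per sample (mean_as_conv is a helper name introduced here, not from the original post):

import torch
import torch.nn.functional as F

def mean_as_conv(x: torch.Tensor) -> torch.Tensor:
    """Per-sample global mean expressed as one convolution spanning the input."""
    n, c, h, w = x.shape
    weight = torch.ones(1, c, h, w, dtype=x.dtype, device=x.device) / (c * h * w)
    return F.conv2d(x, weight)  # shape (N, 1, 1, 1)

x = torch.randn(1, 3, 6, 20)
assert torch.allclose(mean_as_conv(x).squeeze(), torch.mean(x), atol=1e-5)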

IV. Replacing add with a convolution

An identity matrix can serve as a special convolution kernel: convolving with it leaves the input unchanged (an identity transform), which makes it a building block for expressing addition as a convolution.
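
A quick check of that claim, as a minimal sketch (the 4-channel size is arbitrary): reshaping eye(C) into a (C, C, 1, 1) kernel makes output channel i copy input channel i, so the convolution is a no-op.

import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 5, 5)
# eye(4) as a (out=4, in=4, 1, 1) kernel: output channel i copies input channel i
identity_kernel = torch.eye(4).reshape(4, 4, 1, 1)
y = F.conv2d(x, identity_kernel)
assert torch.allclose(y, x)  # identity transform

The full addition-as-convolution routine follows: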

import torch
import torch.nn.functional as F
import numpy as np

# Define the input tensors
lsh = torch.randn(1, 120, 376, 376)
rsh = torch.randn(1, 1, 376, 376)
add_result = torch.add(lsh, rsh)
print("Original add result:", add_result)

# Broadcast-then-convolve: express the addition as a single convolution
def process_add(lsh, rsh):
    lsh_shape = lsh.shape
    rsh_shape = rsh.shape

    # If rsh has one channel, broadcast it to lsh's channel count with a
    # 1x1 convolution whose (C, 1, 1, 1) weights are all ones
    if rsh_shape[1] == 1 and lsh_shape[1] != 1:
        weight = np.ones(lsh_shape[1], 'float32').reshape(lsh_shape[1], 1, 1, 1)
        weight = torch.tensor(weight)
        rsh = F.conv2d(rsh, weight, stride=1, padding=0, groups=1)

    # Stack the two operands along the channel dimension: (1, 240, 376, 376)
    conv_input = torch.cat([lsh, rsh], dim=1)

    # Kernel [I | I]: output channel i reads input channel i (from lsh) plus
    # input channel 120 + i (from the broadcast rsh), i.e. their sum
    weight_param = np.eye(120, dtype='float32').reshape(120, 120, 1, 1)
    weight_param = np.concatenate([weight_param, weight_param], axis=1)
    weight_param = torch.tensor(weight_param)

    new_conv = F.conv2d(conv_input, weight_param, stride=1, padding=0, dilation=1)
    new_conv = new_conv.view(1, 120, 376, 376)

    return new_conv

# Perform the addition via convolution
result = process_add(lsh, rsh)
# Print the results
print("Original lsh shape:", lsh.shape)
print("Original rsh shape:", rsh.shape)
print("Processed result shape:", result.shape)
print("Processed result:", result)

Original article: https://blog.csdn.net/qq_44815135/article/details/142354418
