
YOLO11 Improvement - Module - Introducing the Multi-scale Difference Fusion Module (MDFM)

        Remote sensing change detection (RSCD) aims to identify regions that have changed between two remote sensing images acquired at different times. In recent years, convolutional neural networks (CNNs) have shown strong results on challenging RSCD tasks. However, existing methods fail to fuse bi-temporal features effectively and do not extract information that benefits the downstream RSCD task. Moreover, they ignore multi-level feature interaction during aggregation and overlook the relationship between difference features and bi-temporal features, which degrades RSCD results. To address these problems, the original paper extracts bi-temporal features at different levels with a Siamese convolutional network and then builds a Multi-scale Difference Fusion Module (MDFM) that fuses the bi-temporal features and extracts difference features rich in contextual information in a multi-scale manner. Considering that the Concat operations in the YOLO detection neck ignore the differences between features from different layers, and that the neck lacks multi-scale information, this post replaces the Concat operations in the neck with the MDFM module.

Original YOLOv11 model

 

YOLOv11 architecture after the improvement

1. Structure of the Multi-scale Difference Fusion Module (MDFM)

        MDFM fuses the bi-temporal image features and produces difference features with rich contextual information. The process consists of the following steps:
1. Feature extraction and difference-feature generation
        First, features f1 and f2 are extracted from the bi-temporal images. f1 and f2 are then subtracted pixel by pixel, the absolute value of the result is taken, and a 3x3 convolution produces the difference feature Di.
2. Multi-scale feature fusion (MSFF)
        A multi-scale feature learning mechanism is then used to strengthen the fusion. It builds the multi-scale fusion from convolutions with different kernel sizes, implemented by a unit called MSFF. The MSFF unit contains four convolution branches: [1x1, 3x3, 1x1], [1x1, 5x5, 1x1], [1x1, 7x7, 1x1], and [1x1]. The outputs of the four branches are concatenated and mixed to obtain the fused multi-scale feature Mi.
3. Element-wise channel weights and the final fused difference feature
        Element-wise channel weights Wi are introduced, computed from Fi and Di. Wi is added to the multi-scale feature Mi obtained above, yielding Si, which carries the fused multi-scale information. Finally, a channel-wise convolution block (CWCB) fuses the bi-temporal features: S1 and S2 are concatenated channel by channel, passed through a 3x3 depthwise convolution, and multiplied by Wi to obtain the final fused difference feature Ci.
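
        Condensing the three steps, the data flow that the code in Section 3 implements can be summarized as follows (this is a reading of the implementation below, not the paper's exact notation; * denotes element-wise multiplication and interleave stacks the two feature maps channel by channel):

W  = sigmoid(Conv_3x3(|f1 - f2|))
Si = fi * W + MSFF(Conv_3x3(fi)),  i = 1, 2
C  = Conv_3x3(DWConv_3x3(interleave(S1, S2)) * W)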

2. Combining YOLOv11 with MDFM

        1. This post replaces the Concat operation in the neck with the MDFM module; the concrete YAML change is shown below.
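
        Concretely, in the head of the YAML in Section 4, the P4 fusion line - [[-1, 6], 1, Concat, [1]] becomes - [[-1, 6], 1, MDFM, [256, 384]]. Note that MDFM's conv_up first doubles the channel count of the second input (the backbone P4 feature) so that the two inputs can be differenced element-wise.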

3. MDFM Code

import torch
import torch.nn as nn
from torch.nn.functional import relu6


# https://ieeexplore.ieee.org/abstract/document/10504297


class MSFF(nn.Module):
    """Multi-scale feature fusion unit: four parallel branches with 1x1, 3x3, 5x5 and 7x7 kernels."""

    def __init__(self, inchannel, mid_channel):
        super(MSFF, self).__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(inchannel, inchannel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(inchannel),
                                   nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(inchannel, mid_channel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(mid_channel),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(mid_channel, mid_channel, 3, stride=1, padding=1, bias=False),
                                   nn.BatchNorm2d(mid_channel),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(mid_channel, inchannel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(inchannel),
                                   nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Conv2d(inchannel, mid_channel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(mid_channel),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(mid_channel, mid_channel, 5, stride=1, padding=2, bias=False),
                                   nn.BatchNorm2d(mid_channel),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(mid_channel, inchannel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(inchannel),
                                   nn.ReLU(inplace=True))
        self.conv4 = nn.Sequential(nn.Conv2d(inchannel, mid_channel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(mid_channel),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(mid_channel, mid_channel, 7, stride=1, padding=3, bias=False),
                                   nn.BatchNorm2d(mid_channel),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(mid_channel, inchannel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(inchannel),
                                   nn.ReLU(inplace=True))
        self.convmix = nn.Sequential(nn.Conv2d(4 * inchannel, inchannel, 1, stride=1, bias=False),
                                   nn.BatchNorm2d(inchannel),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(inchannel, inchannel, 3, stride=1, padding=1, bias=False),
                                   nn.BatchNorm2d(inchannel),
                                   nn.ReLU(inplace=True))


    def forward(self, x):

        x1 = self.conv1(x)
        x2 = self.conv2(x)
        x3 = self.conv3(x)
        x4 = self.conv4(x)

        x_f = torch.cat([x1, x2, x3, x4], dim=1)
        out = self.convmix(x_f)

        return out
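
# Quick shape check for MSFF (illustrative; the channel counts here are arbitrary):
# every branch preserves `inchannel`, and convmix maps 4 * inchannel back to
# inchannel, so the output shape equals the input shape, e.g.
#   MSFF(inchannel=128, mid_channel=64)(torch.randn(2, 128, 32, 32)).shape
#   -> torch.Size([2, 128, 32, 32])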

def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p
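
# autopad examples (illustrative): odd kernels get k // 2, and dilation first
# inflates the effective kernel size.
#   autopad(3)       -> 1
#   autopad(7)       -> 3
#   autopad(3, d=2)  -> 2  (effective kernel size 5)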


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (used after conv/BN fusion)."""
        return self.act(self.conv(x))

class MDFM(nn.Module):
    """Multi-scale Difference Fusion Module: fuses two feature maps guided by their difference attention."""

    def __init__(self, in_d, out_d):
        super(MDFM, self).__init__()
        self.in_d = in_d
        self.out_d = out_d
        self.MPFL = MSFF(inchannel=in_d, mid_channel=64)  # multi-scale feature fusion unit

        self.conv_diff_enh = nn.Sequential(
            nn.Conv2d(self.in_d, self.in_d, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(self.in_d),
            nn.ReLU(inplace=True)
        )

        self.conv_dr = nn.Sequential(
            nn.Conv2d(self.in_d, self.out_d, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(self.out_d),
            nn.ReLU(inplace=True)
        )

        self.conv_sub = nn.Sequential(
            nn.Conv2d(self.in_d, self.in_d, 3,  padding=1, bias=False),
            nn.BatchNorm2d(self.in_d),
            nn.ReLU(inplace=True),
        )

        self.convmix = nn.Sequential(
            nn.Conv2d(2 * self.in_d, self.in_d, 3, groups=self.in_d, padding=1, bias=False),
            nn.BatchNorm2d(self.in_d),
            nn.ReLU(inplace=True),
        )
        self.conv_up = Conv(int(in_d * 0.5), in_d, 1, act=nn.ReLU())  # lifts the second input from in_d/2 to in_d channels


    def forward(self, x):
        # x holds two feature maps: x[0] from the upsample path, x[1] from the backbone
        x1, x2 = x[0], x[1]
        b, c, h, w = x1.shape
        x2 = self.conv_up(x2)  # bring x2 up to the same channel count as x1

        # difference enhancement: attention map from the absolute difference
        x_sub = torch.abs(x1 - x2)
        x_att = torch.sigmoid(self.conv_sub(x_sub))
        x1 = (x1 * x_att) + self.MPFL(self.conv_diff_enh(x1))
        x2 = (x2 * x_att) + self.MPFL(self.conv_diff_enh(x2))

        # fusion: interleave the two feature maps channel by channel,
        # then fuse each channel pair with a grouped 3x3 convolution
        x_f = torch.stack((x1, x2), dim=2)
        x_f = torch.reshape(x_f, (b, -1, h, w))
        x_f = self.convmix(x_f)

        # re-weight by the difference attention, then reduce to out_d channels
        x_f = x_f * x_att
        out = self.conv_dr(x_f)

        return out

if __name__ == '__main__':
    x1 = torch.randn((32, 512, 8, 8))
    x2 = torch.randn((32, 256, 8, 8))  # half the channels of x1; conv_up restores them to 512
    model = MDFM(512, 64)
    out = model([x1, x2])  # forward expects both inputs packed in a single list
    print(out.shape)  # torch.Size([32, 64, 8, 8])

4. Integrating the MDFM Block into YOLOv11

Step 1: Copy the MDFM code from Section 3 into the D:\model\yolov11\ultralytics\change_model directory, as shown in the figure below.

Step 2: Import the MDFM module in tasks.py.
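
For example (a sketch: the file name MDFM.py and its import path follow the change_model folder from step 1 and are otherwise assumptions):

# at the top of ultralytics/nn/tasks.py, next to the other module imports
from ultralytics.change_model.MDFM import MDFM  # assumes the code above was saved as change_model/MDFM.py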

Step 3: Register MDFM in the model-parsing section of tasks.py by adding the code below.
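
The original post does not reproduce this code, so the following is only a plausible sketch of how a two-input module is typically wired into parse_model; it assumes MDFM receives [in_d, out_d] from the YAML:

# inside parse_model() in ultralytics/nn/tasks.py, alongside the existing "elif m is ..." branches
elif m is MDFM:
    c2 = args[1]  # out_d: the channel count this layer passes to the next one
    # args stay [in_d, out_d] and are handed straight to MDFM.__init__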


Step 4: Copy the model configuration below into a YAML file, e.g. yolo11_MDFM.yaml.

        The configuration file for the first improvement:

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, MDFM, [256, 384]] # MDFM replaces Concat; fuse with backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)

Step 5: Run the training script below; if everything is wired up correctly, the model builds and training starts successfully.


from ultralytics import YOLO

if __name__=="__main__":

    # Train a model built from the modified yolo11_MDFM.yaml configuration
    model = YOLO(r"D:\model\yolov11\ultralytics\cfg\models\11\yolo11_MDFM.yaml")  # build a new model from YAML
    model.train(data=r'D:\model\yolov11\ultralytics\cfg\datasets\VOC_my.yaml',
                epochs=300, imgsz=640, batch=64,
                # close_mosaic=10,
                )


Original post: https://blog.csdn.net/qq_64693987/article/details/144268506
