
Daily Attention Study #5: Multi-Scale Channel Attention Module

##### Module Source

[WACV 21] Attentional Feature Fusion ([code](https://github.com/YimianDai/open-aff))

* * *

##### Module Name

Multi-Scale Channel Attention Module (MS-CAM)

* * *

##### Module Purpose

Channel attention, computed at both a local (per-position) and a global (per-channel) scale.

* * *

##### Module Structure
![MS-CAM module structure](https://ad.itadn.com/c/weblog/blog-img/images/2025-07-13/STKtaDrZnJigMLp6bs0mcBHf89wC.jpeg)
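
The structure in the figure can be written compactly in the paper's notation:

$$
\mathbf{X}' = \mathbf{X} \otimes \mathbf{M}(\mathbf{X}) = \mathbf{X} \otimes \sigma\big(\mathrm{L}(\mathbf{X}) \oplus g(\mathbf{X})\big)
$$

where $\mathrm{L}(\mathbf{X})$ is the local branch (a pointwise-convolution bottleneck at full resolution), $g(\mathbf{X})$ is the global branch (global average pooling followed by the same bottleneck), $\oplus$ denotes broadcasting addition, $\otimes$ denotes element-wise multiplication, and $\sigma$ is the sigmoid.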

* * *

##### Module Code
```python
import torch
import torch.nn as nn


class MS_CAM(nn.Module):
    """Multi-Scale Channel Attention Module (MS-CAM) from
    [WACV 21] Attentional Feature Fusion."""

    def __init__(self, channels=64, r=4):
        super(MS_CAM, self).__init__()
        inter_channels = channels // r  # bottleneck width

        # Local branch: pointwise-conv bottleneck over the full feature map,
        # preserving spatial resolution (local channel context).
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
            nn.BatchNorm2d(inter_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
            nn.BatchNorm2d(channels),
        )

        # Global branch: global average pooling followed by the same
        # bottleneck, giving one weight per channel (global channel context).
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
            nn.BatchNorm2d(inter_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
            nn.BatchNorm2d(channels),
        )

        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        xl = self.local_att(x)   # (B, C, H, W)
        xg = self.global_att(x)  # (B, C, 1, 1), broadcast over H and W
        xlg = xl + xg            # multi-scale context aggregation
        wei = self.sigmoid(xlg)  # attention weights in (0, 1)
        return x * wei           # recalibrate the input features


if __name__ == '__main__':
    x = torch.randn(3, 256, 16, 16)
    ms_cam = MS_CAM(channels=256)
    out = ms_cam(x)
    print(out.shape)  # torch.Size([3, 256, 16, 16])
```


* * *

##### Original Paper Description

The core idea of MS-CAM is that channel attention can be realized at multiple scales by varying the spatial pooling size. To keep the module as lightweight as possible, we merely add the local context to the global context inside the attention module. We choose pointwise convolution (PointWise Conv) as the local channel context aggregator, which exploits only the point-wise channel interactions at each spatial position.
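
For context, the paper builds Attentional Feature Fusion (AFF) on top of MS-CAM: the attention weight is computed from the element-wise sum of two feature maps and then used to softly select between them. Below is a minimal sketch of this fusion, not the repo's exact implementation; it assumes the `MS_CAM` class above is in scope and recomputes the weight from its two branches (since `MS_CAM.forward` returns the already-reweighted input):

```python
import torch
import torch.nn as nn


class AFF(nn.Module):
    """Sketch of Attentional Feature Fusion built on the MS_CAM class above."""

    def __init__(self, channels=64, r=4):
        super(AFF, self).__init__()
        self.ms_cam = MS_CAM(channels=channels, r=r)  # assumes MS_CAM is in scope

    def forward(self, x, y):
        xy = x + y  # initial integration of the two inputs
        # Recompute the raw attention weight from MS-CAM's two branches.
        wei = self.ms_cam.sigmoid(self.ms_cam.local_att(xy) + self.ms_cam.global_att(xy))
        # Soft selection: wei weights x, (1 - wei) weights y.
        return x * wei + y * (1 - wei)


if __name__ == '__main__':
    x = torch.randn(3, 256, 16, 16)
    y = torch.randn(3, 256, 16, 16)
    aff = AFF(channels=256)
    print(aff(x, y).shape)  # torch.Size([3, 256, 16, 16])
```

Because `wei` is element-wise, the fusion is a per-channel, per-position soft choice between `x` and `y` rather than a fixed addition or concatenation.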
