
Deep Learning Paper: CenterMask: Real-Time Anchor-Free Instance Segmentation, with a PyTorch Implementation


CenterMask performs real-time anchor-free instance segmentation by attaching a spatial-attention-guided mask branch to an anchor-free detector. Paper PDF: https://arxiv.org/pdf/1911.06667.pdf. A PyTorch implementation is available at: https://github.com/shanglianlm0525/PyTorch-Networks

Related posts:
VoVNet: An Efficient Backbone Network for Object Detection, with a PyTorch implementation
VoVNetV2 (CenterMask): Real-Time Anchor-Free Instance Segmentation, with a PyTorch implementation

1 Overview

CenterMask builds on the FCOS detector and adds a new SAG-Mask (Spatial Attention-Guided Mask) branch that predicts a segmentation mask for each detected box, in the same plug-and-play spirit as Mask R-CNN.

2 CenterMask

[Figure]

2-1 Adaptive RoI Assignment Function

Mask R-CNN assigns each detected RoI to an FPN level according to the RoI's scale, and then extracts aligned features with RoIAlign. The mapping is:

    k = ⌊k0 + log2(√(wh) / 224)⌋,  with k0 = 4, where w and h are the RoI's width and height

However, this formula is hard-wired to the 224×224 ImageNet scale, so it does not adapt well to inputs of different sizes. CenterMask therefore redefines the RoI-to-level mapping as:

    k = ⌈k_max − log2(A_input / A_RoI)⌉

where A_input is the area of the input image, A_RoI = wh is the area of the RoI, and k_max is the highest feature level used by the mask branch.
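The two assignment rules above can be sketched as follows. The level bounds (k_min = 3, k_max = 5, i.e. P3–P5) follow common FPN practice and are assumptions here, not values fixed by this post:

```python
import math

def maskrcnn_level(w, h, k0=4, k_min=3, k_max=5):
    """Mask R-CNN rule: k = floor(k0 + log2(sqrt(w*h) / 224)), clamped to [k_min, k_max]."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224))
    return max(k_min, min(k_max, k))

def centermask_level(w, h, img_w, img_h, k_min=3, k_max=5):
    """Adaptive rule: k = ceil(k_max - log2(A_input / A_RoI)), clamped to [k_min, k_max]."""
    k = math.ceil(k_max - math.log2((img_w * img_h) / (w * h)))
    return max(k_min, min(k_max, k))
```

Note that the adaptive rule is relative: an RoI covering the whole image always goes to k_max regardless of the input resolution, which is exactly the scale-invariance the fixed 224 constant lacks.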

2-2 Spatial Attention-Guided Mask (SAG-Mask)

The spatial attention map guides the mask branch to focus on informative pixels and suppress background noise.

[Figure]

PyTorch code (imports added and the `Conv3x3BNReLU` / `Conv1x1BN` helpers filled in; their definitions are assumed to match the conventions of the companion repo):

    import torch
    import torch.nn as nn

    # Assumed helper blocks (conv + BN [+ ReLU]), matching the naming used in the repo
    def Conv3x3BNReLU(in_channels, out_channels, stride):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def Conv1x1BN(in_channels, out_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )

    class SAG_Mask(nn.Module):
        def __init__(self, in_channels, out_channels):
            super(SAG_Mask, self).__init__()
            mid_channels = in_channels

            # Four 3x3 convs extract mask features from the RoI-aligned input
            self.first_convs = nn.Sequential(
                Conv3x3BNReLU(in_channels=in_channels, out_channels=mid_channels, stride=1),
                Conv3x3BNReLU(in_channels=mid_channels, out_channels=mid_channels, stride=1),
                Conv3x3BNReLU(in_channels=mid_channels, out_channels=mid_channels, stride=1),
                Conv3x3BNReLU(in_channels=mid_channels, out_channels=mid_channels, stride=1)
            )

            self.avg_pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
            self.max_pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)

            self.conv3x3 = Conv3x3BNReLU(in_channels=mid_channels*2, out_channels=mid_channels, stride=1)
            self.sigmoid = nn.Sigmoid()

            # 2x upsampling, then a 1x1 conv producing per-class mask logits
            self.deconv = nn.ConvTranspose2d(mid_channels, mid_channels, kernel_size=2, stride=2)
            self.conv1x1 = Conv1x1BN(mid_channels, out_channels)

        def forward(self, x):
            residual = x = self.first_convs(x)
            # Aggregate avg- and max-pooled features, then predict the spatial attention map
            aggregate = torch.cat([self.avg_pool(x), self.max_pool(x)], dim=1)
            sag = self.sigmoid(self.conv3x3(aggregate))
            sag_x = residual + sag * x  # attention-weighted features plus a residual path
            out = self.conv1x1(self.deconv(sag_x))
            return out

    if __name__ == '__main__':
        sag_mask = SAG_Mask(16, 80)
        print(sag_mask)
        input = torch.randn(1, 16, 14, 14)
        out = sag_mask(input)
        print(out.shape)  # torch.Size([1, 80, 28, 28])

2-3 VoVNetV2

VoVNetV2 improves the original VoVNet in two ways: first, a residual connection alleviates the accuracy saturation observed in larger VoVNets; second, a new effective Squeeze-Excitation (eSE) module fixes the channel information loss of the standard SE block. Using the same ResNet-101-FPN backbone, the proposed method reaches 38.3% mask AP, outperforming all prior models while running faster.

[Figure]

A residual connection from the input to the output alleviates the performance saturation and gradient degradation that appear as more OSA modules are stacked.
At the output, a channel attention module, eSE, replaces SE's two fully connected layers with a single layer over all channels, preserving channel information while keeping the parameter count low.
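For contrast, a minimal sketch of the classic SE block that eSE replaces is shown below; the reduction ratio r = 16 is the usual SE default and an assumption here. The two 1x1 convs squeeze C channels down to C/r and back, and this bottleneck is the channel information loss eSE avoids:

```python
import torch
import torch.nn as nn

class SE_Module(nn.Module):
    """Classic Squeeze-and-Excitation: two layers with a channel
    reduction of ratio r (the bottleneck that eSE removes)."""
    def __init__(self, channel, ratio=16):
        super(SE_Module, self).__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excitation = nn.Sequential(
            nn.Conv2d(channel, channel // ratio, kernel_size=1),  # reduce: C -> C/r
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // ratio, channel, kernel_size=1),  # restore: C/r -> C
            nn.Sigmoid()
        )

    def forward(self, x):
        y = self.squeeze(x)
        return x * self.excitation(y).expand_as(x)
```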

PyTorch code (imports and assumed helper blocks added; the stray ReLU before the eSE sigmoid and the unused `ratio` parameter in the original snippet are removed, since eSE applies a single full-channel layer followed by a sigmoid gate — the paper uses a hard sigmoid):

    import torch
    import torch.nn as nn

    # Assumed helper blocks (conv + BN + ReLU), matching the naming used in the repo
    def Conv3x3BNReLU(in_channels, out_channels, stride):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def Conv1x1BNReLU(in_channels, out_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    class eSE_Module(nn.Module):
        def __init__(self, channel):
            super(eSE_Module, self).__init__()
            self.squeeze = nn.AdaptiveAvgPool2d(1)
            # A single 1x1 conv keeps all C channels (no reduction), followed by
            # a sigmoid gate (the paper uses a hard sigmoid)
            self.excitation = nn.Sequential(
                nn.Conv2d(channel, channel, kernel_size=1, padding=0),
                nn.Sigmoid()
            )

        def forward(self, x):
            y = self.squeeze(x)
            z = self.excitation(y)
            return x * z.expand_as(x)

    class OSAv2_module(nn.Module):
        def __init__(self, in_channels, mid_channels, out_channels, block_nums=5):
            super(OSAv2_module, self).__init__()
            # One-Shot Aggregation: a chain of 3x3 convs whose outputs are all
            # concatenated once at the end
            self._layers = nn.ModuleList()
            self._layers.append(Conv3x3BNReLU(in_channels=in_channels, out_channels=mid_channels, stride=1))
            for idx in range(block_nums - 1):
                self._layers.append(Conv3x3BNReLU(in_channels=mid_channels, out_channels=mid_channels, stride=1))

            self.conv1x1 = Conv1x1BNReLU(in_channels + mid_channels * block_nums, out_channels)
            self.ese = eSE_Module(out_channels)
            self.pass_conv1x1 = Conv1x1BNReLU(in_channels, out_channels)

        def forward(self, x):
            residual = x
            outputs = [x]
            for _layer in self._layers:
                x = _layer(x)
                outputs.append(x)
            out = self.ese(self.conv1x1(torch.cat(outputs, dim=1)))
            # VoVNetV2 residual connection (input projected to out_channels)
            return out + self.pass_conv1x1(residual)

4 Experimental Results

[Figure]
