
Deep Learning Paper: RepVGG: Making VGG-style ConvNets Great Again, and its PyTorch Implementation


Deep learning paper: RepVGG: Making VGG-style ConvNets Great Again, and its PyTorch implementation. Paper (PDF): https://arxiv.org/pdf/2101.03697.pdf. PyTorch implementations: https://github.com/shanglianlm0525/PyTorch-Networks and https://github.com/shanglianlm0525/CvPytorch.

1 Overview

Many recent architectural designs improve model accuracy, but overly complex structures hurt inference speed. For example:
a. Multi-branch designs, such as the element-wise add in residual networks and the concat operations in the Inception family, increase memory consumption and inference time.
b. Lightweight designs such as depthwise convolution and the channel shuffle in ShuffleNet have low FLOPs, but they increase memory-access cost, so actual inference speed is still slow.

2 RepVGG

2-1 Advantages of VGG-style Networks

Extremely fast inference


For example, VGG-16 has roughly 8x the FLOPs of EfficientNet-B3, yet it still runs about 1.8x faster, because FLOPs alone do not determine real speed.

Memory-efficient: multi-branch structures consume a lot of memory, because the output of every branch must be kept until the branches are merged, and that memory is only released after the final merge; a single-branch VGG-style network avoids this.


Flexible: the plain single-branch topology makes it easy to change layer widths (e.g., for channel pruning).

2-2 RepVGG Block

A RepVGG block can be written as

Out = F(X) + G(X) + X, where F(X) is a 3×3 convolution, G(X) is a 1×1 convolution, and X is the identity branch (at training time each of the three branches is followed by its own BatchNorm).

Both the 3×3 and the 1×1 branch contribute a measurable accuracy improvement.


Converting a RepVGG block into a single convolution is straightforward. The 1×1 convolution in the block is just a special 3×3 convolution (with many zeros in its kernel), and the identity branch is a special 1×1 convolution (whose kernel is an identity matrix), which in turn is also a special 3×3 convolution. The conversion therefore has two steps: first, express the identity branch as a 1×1 convolution built from an identity matrix; second, zero-pad the 1×1 kernels into 3×3 kernels. (In practice, the BatchNorm of each branch is also folded into its convolution before the kernels and biases are summed.)
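
A minimal PyTorch sketch of this fusion is shown below. It is not the authors' official converter, and the helper names (fuse_conv_bn, pad_1x1_to_3x3, identity_to_3x3) are my own, but it follows the recipe above: fold each branch's BatchNorm into an equivalent convolution weight and bias, express the 1×1 and identity branches as 3×3 kernels, and sum the three kernels and biases into one 3×3 convolution. The check at the end verifies numerically that the fused convolution reproduces the three-branch output.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fuse_conv_bn(conv_weight, bn):
        # Fold a BatchNorm that follows a bias-free conv into the conv itself:
        # y = gamma * (W*x - mean) / sqrt(var + eps) + beta
        #   = (gamma / std) * W * x + (beta - gamma * mean / std)
        std = (bn.running_var + bn.eps).sqrt()
        t = (bn.weight / std).reshape(-1, 1, 1, 1)
        return conv_weight * t, bn.bias - bn.running_mean * bn.weight / std

    def pad_1x1_to_3x3(kernel_1x1):
        # Zero-pad a 1x1 kernel to 3x3 so it can be added to a 3x3 kernel
        return F.pad(kernel_1x1, [1, 1, 1, 1])

    def identity_to_3x3(channels):
        # The identity branch as a 3x3 kernel: a 1 at the centre of each
        # output channel's own input channel (groups=1 case)
        kernel = torch.zeros(channels, channels, 3, 3)
        for i in range(channels):
            kernel[i, i, 1, 1] = 1.0
        return kernel

    # Numerical check: three branches vs. one fused 3x3 conv
    C = 8
    torch.manual_seed(0)
    x = torch.randn(1, C, 14, 14)
    conv3, bn3 = nn.Conv2d(C, C, 3, padding=1, bias=False), nn.BatchNorm2d(C)
    conv1, bn1 = nn.Conv2d(C, C, 1, bias=False), nn.BatchNorm2d(C)
    bnid = nn.BatchNorm2d(C)
    for bn in (bn3, bn1, bnid):
        # fake some "trained" BatchNorm statistics so the check is non-trivial
        bn.running_mean.normal_()
        bn.running_var.uniform_(0.5, 1.5)
        bn.weight.data.normal_()
        bn.bias.data.normal_()
        bn.eval()

    y_branches = bn3(conv3(x)) + bn1(conv1(x)) + bnid(x)

    k3, b3 = fuse_conv_bn(conv3.weight, bn3)
    k1, b1 = fuse_conv_bn(conv1.weight, bn1)
    kid, bid = fuse_conv_bn(identity_to_3x3(C), bnid)
    kernel, bias = k3 + pad_1x1_to_3x3(k1) + kid, b3 + b1 + bid
    y_fused = F.conv2d(x, kernel, bias, padding=1)
    print(torch.allclose(y_branches, y_fused, atol=1e-5))  # expected: True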


3 Architecture


Two families of models are defined, RepVGG-A and RepVGG-B. They differ mainly in how many blocks are stacked in each stage, and the channel counts are scaled by the width multipliers a and b (a for the first four stages, b for the last).
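
As a quick worked example (a sanity check using the same width formulas as the RepVGG class in the code below, not new results): with the A0 multipliers a = 0.75 and b = 2.5 the five stages get 48, 48, 96, 192 and 1280 channels, and with the B1 multipliers a = 2 and b = 4 they get 64, 128, 256, 512 and 2048.

    # Per-stage channel counts implied by the width multipliers [a, a, a, b]
    # (same formulas as the RepVGG class below)
    def stage_widths(a, b):
        return [min(64, int(64 * a)), int(64 * a), int(128 * a), int(256 * a), int(512 * b)]

    print(stage_widths(0.75, 2.5))  # RepVGG-A0 -> [48, 48, 96, 192, 1280]
    print(stage_widths(2, 4))       # RepVGG-B1 -> [64, 128, 256, 512, 2048]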


PyTorch code:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # @Time : 2021/2/25 15:45
    # @Author : liumin
    # @File : repVGGNet.py
    
    import numpy as np
    import torch
    import torch.nn as nn
    
    
    def Conv1x1BN(in_channels, out_channels, stride=1, groups=1, bias=False):
        # 1x1 convolution followed by BatchNorm (no activation)
        return nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=0, groups=groups, bias=bias),
            nn.BatchNorm2d(out_channels)
        )

    def Conv3x3BN(in_channels, out_channels, stride=1, groups=1, bias=False):
        # 3x3 convolution followed by BatchNorm (no activation)
        return nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride, padding=1, groups=groups, bias=bias),
            nn.BatchNorm2d(out_channels)
        )
    
    
    class RepVGGBlock(nn.Module):
        def __init__(self, in_channels, out_channels, stride=1, groups=1, deploy=False):
            super(RepVGGBlock, self).__init__()
            self.deploy = deploy

            if self.deploy:
                # Inference-time block: a single re-parameterized 3x3 conv (with bias)
                self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                      kernel_size=3, stride=stride, padding=1, dilation=1, groups=groups, bias=True)
            else:
                # Training-time block: 3x3 conv + 1x1 conv + (optional) identity, each with its own BN
                self.conv1 = Conv3x3BN(in_channels, out_channels, stride=stride, groups=groups, bias=False)
                self.conv2 = Conv1x1BN(in_channels, out_channels, stride=stride, groups=groups, bias=False)

                # The identity branch only exists when input and output shapes match
                self.identity = nn.BatchNorm2d(in_channels) if out_channels == in_channels and stride == 1 else None

            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            if self.deploy:
                return self.act(self.conv(x))
            if self.identity is None:
                return self.act(self.conv1(x) + self.conv2(x))
            else:
                return self.act(self.conv1(x) + self.conv2(x) + self.identity(x))
    
    
    class RepVGG(nn.Module):
        def __init__(self, block_nums, width_multiplier=None, group=1, num_classes=1000, deploy=False):
            super(RepVGG, self).__init__()
            self.deploy = deploy
            self.group = group
            assert len(width_multiplier) == 4

            # Stage 0 is a single strided block; the remaining stages are built by _make_layers
            self.stage0 = RepVGGBlock(in_channels=3, out_channels=min(64, int(64 * width_multiplier[0])), stride=2, deploy=self.deploy)
            self.cur_layer_idx = 1
            self.stage1 = self._make_layers(in_channels=min(64, int(64 * width_multiplier[0])), out_channels=int(64 * width_multiplier[0]), stride=2, block_num=block_nums[0])
            self.stage2 = self._make_layers(in_channels=int(64 * width_multiplier[0]), out_channels=int(128 * width_multiplier[1]), stride=2, block_num=block_nums[1])
            self.stage3 = self._make_layers(in_channels=int(128 * width_multiplier[1]), out_channels=int(256 * width_multiplier[2]), stride=2, block_num=block_nums[2])
            self.stage4 = self._make_layers(in_channels=int(256 * width_multiplier[2]), out_channels=int(512 * width_multiplier[3]), stride=2, block_num=block_nums[3])
            self.avg_pool = nn.AdaptiveAvgPool2d(output_size=1)
            self.linear = nn.Linear(int(512 * width_multiplier[3]), num_classes)

            self._init_params()

        def _make_layers(self, in_channels, out_channels, stride, block_num):
            # block_num counts all blocks in the stage, including the first (strided) one
            layers = []
            layers.append(RepVGGBlock(in_channels, out_channels, stride=stride, groups=self.group if self.cur_layer_idx % 2 == 0 else 1, deploy=self.deploy))
            self.cur_layer_idx += 1
            for i in range(block_num - 1):
                layers.append(RepVGGBlock(out_channels, out_channels, stride=1, groups=self.group if self.cur_layer_idx % 2 == 0 else 1, deploy=self.deploy))
                self.cur_layer_idx += 1
            return nn.Sequential(*layers)

        def _init_params(self):
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                elif isinstance(m, nn.BatchNorm2d):
                    nn.init.constant_(m.weight, 1)
                    nn.init.constant_(m.bias, 0)

        def forward(self, x):
            x = self.stage0(x)
            x = self.stage1(x)
            x = self.stage2(x)
            x = self.stage3(x)
            x = self.stage4(x)
            x = self.avg_pool(x)
            x = x.view(x.size(0), -1)
            out = self.linear(x)
            return out
    
    def RepVGG_A0(deploy=False):
        return RepVGG(block_nums=[2, 4, 14, 1], num_classes=1000,
                      width_multiplier=[0.75, 0.75, 0.75, 2.5], group=1, deploy=deploy)

    def RepVGG_A1(deploy=False):
        return RepVGG(block_nums=[2, 4, 14, 1], num_classes=1000,
                      width_multiplier=[1, 1, 1, 2.5], group=1, deploy=deploy)

    def RepVGG_A2(deploy=False):
        return RepVGG(block_nums=[2, 4, 14, 1], num_classes=1000,
                      width_multiplier=[1.5, 1.5, 1.5, 2.75], group=1, deploy=deploy)

    def RepVGG_B0(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[1, 1, 1, 2.5], group=1, deploy=deploy)

    def RepVGG_B1(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[2, 2, 2, 4], group=1, deploy=deploy)

    def RepVGG_B1g2(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[2, 2, 2, 4], group=2, deploy=deploy)

    def RepVGG_B1g4(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[2, 2, 2, 4], group=4, deploy=deploy)

    def RepVGG_B2(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[2.5, 2.5, 2.5, 5], group=1, deploy=deploy)

    def RepVGG_B2g2(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[2.5, 2.5, 2.5, 5], group=2, deploy=deploy)

    def RepVGG_B2g4(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[2.5, 2.5, 2.5, 5], group=4, deploy=deploy)

    def RepVGG_B3(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[3, 3, 3, 5], group=1, deploy=deploy)

    def RepVGG_B3g2(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[3, 3, 3, 5], group=2, deploy=deploy)

    def RepVGG_B3g4(deploy=False):
        return RepVGG(block_nums=[4, 6, 16, 1], num_classes=1000,
                      width_multiplier=[3, 3, 3, 5], group=4, deploy=deploy)


    if __name__ == '__main__':
        model = RepVGG_A1()
        print(model)

        input = torch.randn(1, 3, 224, 224)
        out = model(input)
        print(out.shape)
