
PaddlePaddle Paper Reproduction Camp: Study Notes on "3D Residual Networks for Action Recognition"


Study notes on using 3D residual networks for action recognition.

1 Background

1.1 C3D

C3D extracts features from video data with 3D convolutions, processing the horizontal, vertical, and temporal dimensions jointly so that the video content is analyzed as a whole. Compared with 2D convolutions, this captures richer spatio-temporal features and preserves temporal information much better. The trade-off is that adding the temporal dimension significantly increases the computational cost.
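To make the extra dimension concrete, here is a small PyTorch sketch (illustrative only, not taken from the C3D code base) contrasting a 2D convolution over a single frame with a 3D convolution over a 16-frame clip:

    import torch
    from torch import nn

    frame = torch.randn(1, 3, 112, 112)       # one RGB frame: (batch, channels, H, W)
    clip = torch.randn(1, 3, 16, 112, 112)    # 16 stacked frames: (batch, channels, T, H, W)

    conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    conv3d = nn.Conv3d(3, 64, kernel_size=3, padding=1)   # the kernel also slides along time

    print(conv2d(frame).shape)   # torch.Size([1, 64, 112, 112])     - spatial features only
    print(conv3d(clip).shape)    # torch.Size([1, 64, 16, 112, 112]) - spatio-temporal features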


P.S.: The C3D source code and pretrained models are available at http://vlg.cs.dartmouth.edu/c3d/


Compared with classic 2D convolutional networks such as VGG16 and VGG19, C3D uses noticeably fewer convolutional layers (only eight in total). Even so, the model is prone to overfitting when training data are insufficient.

1.2 The Kinetics Dataset

On May 22, 2017, the DeepMind team released Kinetics, a highly influential video classification dataset.

1.3 ResNet

Deeper neural networks are generally harder to train.
In "Deep Residual Learning for Image Recognition", Kaiming He and colleagues at Microsoft Research proposed the residual learning framework ResNet.
Instead of asking each group of stacked layers to learn an unreferenced mapping, ResNet lets the input of a block pass through unchanged via a shortcut while the stacked layers learn only the remaining (residual) mapping.
A carefully designed ResNet achieved a 3.57% error rate on the ImageNet test set.
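In the paper's notation, each residual block therefore computes y = F(x) + x, where x is the block's input passed through the shortcut unchanged and F(x) is the residual mapping learned by the stacked layers; the shortcut adds x back without introducing any extra parameters.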


The ResNet paper also reports classification error rates on the CIFAR-10 test set.


2 Method

2.1 Overview of 3D ResNets

Against this research background, recent large-scale video datasets such as Kinetics can effectively reduce overfitting, yet compared with strong 2D networks such as ResNet, the 3D architecture C3D is still relatively shallow. The authors therefore propose 3D ResNets, a new architecture that extends ResNet to 3D.

2.2 3D ResNet Network Architecture

2.2.1 Residual Block

ResNet introduces shortcut connections that bypass a block of stacked layers: the block's input is added directly to its output, so the stacked layers only have to learn the residual mapping. These shortcuts also let gradients flow from deep layers back to shallow ones during backpropagation, which makes very deep networks much easier to train. As shown in Figure 1 of the paper, the residual block built around such a shortcut is the basic building block of ResNet; it improves the network's capacity without adding noticeable computational cost.
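A minimal PyTorch sketch of a 3D basic residual block (this simplified version assumes stride 1 and equal input/output channels; the repo's models/resnet.py additionally handles down-sampling and different shortcut types):

    import torch
    from torch import nn

    class BasicBlock3D(nn.Module):
        """Minimal 3D residual block: two 3x3x3 convolutions plus an identity shortcut."""

        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm3d(channels)
            self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm3d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            residual = x                              # identity shortcut
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            out = out + residual                      # add the shortcut before the final activation
            return self.relu(out)

    block = BasicBlock3D(64)
    dummy = torch.randn(1, 64, 16, 28, 28)            # (batch, channels, frames, height, width)
    print(block(dummy).shape)                          # torch.Size([1, 64, 16, 28, 28])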

2.2.2 Network Architecture


Table 1 of the paper shows that 3D ResNets differ from the original ResNets in the dimensionality of their convolutions and pooling: every convolution and pooling operation is three-dimensional. The convolution kernels are 3 × 3 × 3, and the temporal stride of conv1 is 1, similar to C3D. The network takes 16 consecutive RGB frames as its input clip, so each input sample has size 3 × 16 × 112 × 112 (channels × frames × height × width). The residual blocks follow the bracketed configurations listed in Table 1, and every convolution is followed by batch normalization (BN) and a ReLU activation. Down-sampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2. When the number of feature maps increases, identity shortcuts with zero padding are used so that no extra parameters are introduced. The final layer classifies the 400 categories of the Kinetics dataset and applies a softmax to map the outputs to probabilities between 0 and 1.
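One detail worth illustrating is the parameter-free shortcut used when the number of feature maps grows: the identity is down-sampled and the missing channels are filled with zeros (the repo selects this behavior via its resnet_shortcut option; the helper below is only an illustrative sketch, not the repo's exact code):

    import torch
    import torch.nn.functional as F

    def zero_pad_shortcut(x, out_channels, stride):
        """Parameter-free identity shortcut: reduce resolution with a stride, then zero-pad new channels."""
        out = F.avg_pool3d(x, kernel_size=1, stride=stride)    # match the down-sampled resolution
        pad = out_channels - out.size(1)
        zeros = out.new_zeros(out.size(0), pad, out.size(2), out.size(3), out.size(4))
        return torch.cat([out, zeros], dim=1)                  # match the increased channel count

    x = torch.randn(2, 64, 16, 56, 56)
    print(zero_pad_shortcut(x, 128, stride=2).shape)           # torch.Size([2, 128, 8, 28, 28])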

2.2.3 Training

The authors trained the 3D ResNets with stochastic gradient descent (SGD) with momentum, and augmented the training data by randomly generating samples from the videos in the training set, as follows (a short sketch of the cropping scales follows this list):

A temporal position in the video is selected by uniform sampling for each sample.
A 16-frame clip is generated around the selected temporal position; if the video contains fewer than 16 frames, it is looped as many times as necessary.
A spatial position is randomly chosen from the four corners or the center of the frame.
Multi-scale cropping is applied to each sample, with scales \left\{1, \frac{1}{2^{1/4}}, \frac{1}{\sqrt{2}}, \frac{1}{2^{3/4}}, \frac{1}{2}\right\}; scale 1 corresponds to the largest crop, all crops are square (aspect ratio 1), and each sample is flipped horizontally with probability 50%.
Mean subtraction is applied to each sample, and every generated sample keeps the class label of its original video.
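The cropping scales above can be generated exactly as the repo does for its multi-scale corner crop (see get_train_utils in main.py below); the frame size here is only a hypothetical example:

    short_side = 128                              # hypothetical short side of a frame, for illustration
    scales = [1.0]
    scale_step = 1 / (2 ** (1 / 4))               # each step shrinks the crop by a factor of 2^(-1/4)
    for _ in range(4):
        scales.append(scales[-1] * scale_step)    # -> [1, 2^(-1/4), 2^(-1/2), 2^(-3/4), 1/2]

    crop_sizes = [round(short_side * s) for s in scales]
    print(scales)                                 # [1.0, 0.8409..., 0.7071..., 0.5946..., 0.5]
    print(crop_sizes)                             # square crop sizes, later resized to the 112 x 112 input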

During training, the learning rate was initially set to 5e-2 and later reduced to 5e-4, after which the validation loss stabilized; a relatively large learning rate and batch size proved important for good performance.
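A minimal sketch of such a schedule using PyTorch's plateau scheduler; the initial learning rate mirrors the value quoted above, while the momentum and weight decay here are illustrative assumptions rather than the paper's settings:

    from torch import nn, optim

    model = nn.Conv3d(3, 64, kernel_size=3, padding=1)         # stand-in for the 3D ResNet
    optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-3)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=2)

    fake_val_losses = [1.0, 0.8, 0.7, 0.7, 0.7, 0.7]            # pretend the validation loss plateaus
    for epoch, val_loss in enumerate(fake_val_losses, start=1):
        scheduler.step(val_loss)                                # lr is divided by 10 once the loss stops improving
        print(epoch, optimizer.param_groups[0]['lr'])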

2.2.4 Recognition

At recognition time, each video is split into non-overlapping 16-frame clips, mirroring how clips are generated during training. Each clip is cropped around its center at the maximum scale, passed through the trained model to obtain class probabilities, and the probabilities of all clips are averaged to recognize the action in the whole video.
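A hedged sketch of this clip-level averaging (the helper below is illustrative, not the repo's inference code; model is assumed to map a (1, 3, 16, 112, 112) clip to class logits):

    import torch

    def predict_video(model, frames, clip_len=16):
        """frames: tensor of shape (3, T, 112, 112) holding the center-cropped video."""
        model.eval()
        clip_probs = []
        with torch.no_grad():
            for start in range(0, frames.shape[1] - clip_len + 1, clip_len):   # non-overlapping clips
                clip = frames[:, start:start + clip_len].unsqueeze(0)          # (1, 3, 16, 112, 112)
                clip_probs.append(torch.softmax(model(clip), dim=1))
        return torch.cat(clip_probs).mean(dim=0)                               # average class probabilities

    # Tiny stand-in "model" so the sketch runs end to end (400 Kinetics classes assumed)
    dummy_model = torch.nn.Sequential(
        torch.nn.AdaptiveAvgPool3d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(3, 400),
    )
    video = torch.randn(3, 64, 112, 112)              # a 64-frame video -> four 16-frame clips
    print(predict_video(dummy_model, video).shape)    # torch.Size([400])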

2.2.5 Datasets

In these experiments the authors used the ActivityNet (v1.3) and Kinetics datasets.

The ActivityNet dataset provides 200 human action classes, with an average of 137 untrimmed videos per class and about 1.41 activity instances per video. The total video length is 849 hours, and there are roughly 28,108 activity instances in all. The dataset is randomly split into three parts: 50% for training, 25% for validation, and 25% for testing.

When Kinetics was released in 2017, it covered 400 human action classes with at least 400 clips per class, amounting to roughly 300,000 clips of about ten seconds each, split into training, validation, and test sets.

For both datasets, the authors resized the videos to a height of 360 pixels without changing the aspect ratio.

3 Results

3.1 Preliminary Experiments on the ActivityNet Dataset


This experiment examines how well a 3D ResNet can be trained on a relatively small dataset. The team trained an 18-layer 3D ResNet configured as in Table 1 and compared it with a C3D model pretrained on Sports-1M. As Figure 2 of the paper shows, the 18-layer 3D ResNet overfits, with validation accuracy clearly below training accuracy, whereas the Sports-1M-pretrained C3D does not overfit and achieves better recognition performance.

3.2 Experiments on the Kinetics Dataset


In this experiment, the team used a 34-layer 3D ResNet instead of the 18-layer version, since Kinetics contains far more samples than ActivityNet. As the corresponding figure shows, the 34-layer architecture does not overfit and achieves good performance; panel (b) of the figure also indicates that the Sports-1M-pretrained model reaches a relatively high validation accuracy.


Table 2 compares the accuracy of the 34-layer 3D ResNet with other state-of-the-art methods of the time. The 34-layer 3D ResNet outperforms the Sports-1M-pretrained C3D as well as C3D trained from scratch end to end with batch normalization (BN), which confirms the effectiveness of 3D ResNets. The best-performing method, RGB-I3D, is built on a comparatively shallow backbone yet still performs better. A likely reason is compute: RGB-I3D was trained with 64 GPUs, whereas the 3D ResNet was trained with only 4 GPUs, which limited its input size to 3 × 16 × 112 × 112, while RGB-I3D used inputs of 3 × 64 × 224 × 224. Given sufficient resources, increasing the spatial resolution and the temporal length of the input clips could therefore further improve the 3D ResNets.

4 Conclusion

The authors designed a network architecture built from 3D convolutional kernels and 3D pooling layers and validated it through a series of experiments, achieving strong results on video classification, especially on large-scale datasets [3].

5 Brief Source Code Walkthrough

Source code reference: https://github.com/kenshohara/3D-ResNets

5.1 training.py

    import time
    import os
    import sys
    
    import torch	# when porting to PaddlePaddle, replace with the corresponding paddle package
    import torch.distributed as dist	# likewise, use paddle's distributed package when porting
    
    from utils import AverageMeter, calculate_accuracy
    
    
    def train_epoch(epoch,	# index of the current training epoch
                data_loader,
                model,
                criterion,
                optimizer,
                device,
                current_lr,
                epoch_logger,
                batch_logger,
                tb_writer=None,
                distributed=False):
    print('train at epoch {}'.format(epoch))
    
    model.train()
    
    batch_time = AverageMeter()
    data_time = AverageMeter()
    losses = AverageMeter()
    accuracies = AverageMeter()
    
    end_time = time.time()
    for i, (inputs, targets) in enumerate(data_loader):
        data_time.update(time.time() - end_time)
    
        targets = targets.to(device, non_blocking=True)
        outputs = model(inputs)
        loss = criterion(outputs, targets)	# compute the loss
        acc = calculate_accuracy(outputs, targets)	# compute the accuracy
    
        losses.update(loss.item(), inputs.size(0))	# update the running loss meter
        accuracies.update(acc, inputs.size(0))	# update the running accuracy meter
    
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    
        batch_time.update(time.time() - end_time)
        end_time = time.time()
    
        if batch_logger is not None:
            batch_logger.log({
                'epoch': epoch,
                'batch': i + 1,
                'iter': (epoch - 1) * len(data_loader) + (i + 1),
                'loss': losses.val,
                'acc': accuracies.val,
                'lr': current_lr
            })
    
        print('Epoch: [{0}][{1}/{2}]\t'								# print the training progress log
              'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
              'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
              'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
              'Acc {acc.val:.3f} ({acc.avg:.3f})'.format(epoch,
                                                         i + 1,
                                                         len(data_loader),
                                                         batch_time=batch_time,
                                                         data_time=data_time,
                                                         loss=losses,
                                                         acc=accuracies))
    
    if distributed:
        loss_sum = torch.tensor([losses.sum],
                                dtype=torch.float32,
                                device=device)
        loss_count = torch.tensor([losses.count],
                                  dtype=torch.float32,
                                  device=device)
        acc_sum = torch.tensor([accuracies.sum],
                               dtype=torch.float32,
                               device=device)
        acc_count = torch.tensor([accuracies.count],
                                 dtype=torch.float32,
                                 device=device)
    
        dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM)
        dist.all_reduce(loss_count, op=dist.ReduceOp.SUM)
        dist.all_reduce(acc_sum, op=dist.ReduceOp.SUM)
        dist.all_reduce(acc_count, op=dist.ReduceOp.SUM)
    
        losses.avg = loss_sum.item() / loss_count.item()
        accuracies.avg = acc_sum.item() / acc_count.item()
    
    if epoch_logger is not None:
        epoch_logger.log({
            'epoch': epoch,
            'loss': losses.avg,
            'acc': accuracies.avg,
            'lr': current_lr
        })
    
    if tb_writer is not None:
        tb_writer.add_scalar('train/loss', losses.avg, epoch)
        tb_writer.add_scalar('train/acc', accuracies.avg, epoch)
        tb_writer.add_scalar('train/lr', current_lr, epoch)	# log the learning rate (not the accuracy) under train/lr

5.2 main.py

    from pathlib import Path
    import json
    import random
    import os
    
    import numpy as np
    import torch
    from torch.nn import CrossEntropyLoss
    from torch.optim import SGD, lr_scheduler
    import torch.multiprocessing as mp
    import torch.distributed as dist
    from torch.backends import cudnn
    import torchvision
    
    from opts import parse_opts
    from model import (generate_model, load_pretrained_model, make_data_parallel,
                   get_fine_tuning_parameters)
    from mean import get_mean_std
    from spatial_transforms import (Compose, Normalize, Resize, CenterCrop,
                                CornerCrop, MultiScaleCornerCrop,
                                RandomResizedCrop, RandomHorizontalFlip,
                                ToTensor, ScaleValue, ColorJitter,
                                PickFirstChannels)
    from temporal_transforms import (LoopPadding, TemporalRandomCrop,
                                 TemporalCenterCrop, TemporalEvenCrop,
                                 SlidingWindow, TemporalSubsampling)
    from temporal_transforms import Compose as TemporalCompose
    from dataset import get_training_data, get_validation_data, get_inference_data
    from utils import Logger, worker_init_fn, get_lr
    from training import train_epoch
    from validation import val_epoch
    import inference
    
    
    def json_serial(obj):
    if isinstance(obj, Path):
        return str(obj)
    
    
    def get_opt():
    opt = parse_opts()
    
    if opt.root_path is not None:
        opt.video_path = opt.root_path / opt.video_path
        opt.annotation_path = opt.root_path / opt.annotation_path
        opt.result_path = opt.root_path / opt.result_path
        if opt.resume_path is not None:
            opt.resume_path = opt.root_path / opt.resume_path
        if opt.pretrain_path is not None:
            opt.pretrain_path = opt.root_path / opt.pretrain_path
    
    if opt.pretrain_path is not None:
        opt.n_finetune_classes = opt.n_classes
        opt.n_classes = opt.n_pretrain_classes
    
    if opt.output_topk <= 0:
        opt.output_topk = opt.n_classes
    
    if opt.inference_batch_size == 0:
        opt.inference_batch_size = opt.batch_size
    
    opt.arch = '{}-{}'.format(opt.model, opt.model_depth)
    opt.begin_epoch = 1
    opt.mean, opt.std = get_mean_std(opt.value_scale, dataset=opt.mean_dataset)
    opt.n_input_channels = 3
    if opt.input_type == 'flow':
        opt.n_input_channels = 2
        opt.mean = opt.mean[:2]
        opt.std = opt.std[:2]
    
    if opt.distributed:
        opt.dist_rank = int(os.environ["OMPI_COMM_WORLD_RANK"])
    
        if opt.dist_rank == 0:
            print(opt)
            with (opt.result_path / 'opts.json').open('w') as opt_file:
                json.dump(vars(opt), opt_file, default=json_serial)
    else:
        print(opt)
        with (opt.result_path / 'opts.json').open('w') as opt_file:
            json.dump(vars(opt), opt_file, default=json_serial)
    
    return opt
    
    # Resume model weights from a saved checkpoint
    def resume_model(resume_path, arch, model):
    print('loading checkpoint {} model'.format(resume_path))
    checkpoint = torch.load(resume_path, map_location='cpu')
    assert arch == checkpoint['arch']
    
    if hasattr(model, 'module'):
        model.module.load_state_dict(checkpoint['state_dict'])
    else:
        model.load_state_dict(checkpoint['state_dict'])
    
    return model
    
    
    def resume_train_utils(resume_path, begin_epoch, optimizer, scheduler):
    print('loading checkpoint {} train utils'.format(resume_path))
    checkpoint = torch.load(resume_path, map_location='cpu')
    
    begin_epoch = checkpoint['epoch'] + 1
    if optimizer is not None and 'optimizer' in checkpoint:
        optimizer.load_state_dict(checkpoint['optimizer'])
    if scheduler is not None and 'scheduler' in checkpoint:
        scheduler.load_state_dict(checkpoint['scheduler'])
    
    return begin_epoch, optimizer, scheduler
    
    # Build the normalization (mean/std standardization) transform
    def get_normalize_method(mean, std, no_mean_norm, no_std_norm):
    if no_mean_norm:
        if no_std_norm:
            return Normalize([0, 0, 0], [1, 1, 1])
        else:
            return Normalize([0, 0, 0], std)
    else:
        if no_std_norm:
            return Normalize(mean, [1, 1, 1])
        else:
            return Normalize(mean, std)
    
    
    def get_train_utils(opt, model_parameters):
    assert opt.train_crop in ['random', 'corner', 'center']
    spatial_transform = []
    if opt.train_crop == 'random':
        spatial_transform.append(
            RandomResizedCrop(
                opt.sample_size, (opt.train_crop_min_scale, 1.0),
                (opt.train_crop_min_ratio, 1.0 / opt.train_crop_min_ratio)))
    elif opt.train_crop == 'corner':
        scales = [1.0]
        scale_step = 1 / (2**(1 / 4))
        for _ in range(1, 5):
            scales.append(scales[-1] * scale_step)
        spatial_transform.append(MultiScaleCornerCrop(opt.sample_size, scales))
    elif opt.train_crop == 'center':
        spatial_transform.append(Resize(opt.sample_size))
        spatial_transform.append(CenterCrop(opt.sample_size))
    normalize = get_normalize_method(opt.mean, opt.std, opt.no_mean_norm,
                                     opt.no_std_norm)
    if not opt.no_hflip:
        spatial_transform.append(RandomHorizontalFlip())
    if opt.colorjitter:
        spatial_transform.append(ColorJitter())
    spatial_transform.append(ToTensor())
    if opt.input_type == 'flow':
        spatial_transform.append(PickFirstChannels(n=2))
    spatial_transform.append(ScaleValue(opt.value_scale))
    spatial_transform.append(normalize)
    spatial_transform = Compose(spatial_transform)
    
    assert opt.train_t_crop in ['random', 'center']
    temporal_transform = []
    if opt.sample_t_stride > 1:
        temporal_transform.append(TemporalSubsampling(opt.sample_t_stride))
    if opt.train_t_crop == 'random':
        temporal_transform.append(TemporalRandomCrop(opt.sample_duration))
    elif opt.train_t_crop == 'center':
        temporal_transform.append(TemporalCenterCrop(opt.sample_duration))
    temporal_transform = TemporalCompose(temporal_transform)
    
    train_data = get_training_data(opt.video_path, opt.annotation_path,
                                   opt.dataset, opt.input_type, opt.file_type,
                                   spatial_transform, temporal_transform)
    if opt.distributed:
        train_sampler = torch.utils.data.distributed.DistributedSampler(
            train_data)
    else:
        train_sampler = None
    train_loader = torch.utils.data.DataLoader(train_data,
                                               batch_size=opt.batch_size,
                                               shuffle=(train_sampler is None),
                                               num_workers=opt.n_threads,
                                               pin_memory=True,
                                               sampler=train_sampler,
                                               worker_init_fn=worker_init_fn)
    
    if opt.is_master_node:
        train_logger = Logger(opt.result_path / 'train.log',
                              ['epoch', 'loss', 'acc', 'lr'])
        train_batch_logger = Logger(
            opt.result_path / 'train_batch.log',
            ['epoch', 'batch', 'iter', 'loss', 'acc', 'lr'])
    else:
        train_logger = None
        train_batch_logger = None
    
    if opt.nesterov:
        dampening = 0
    else:
        dampening = opt.dampening
    optimizer = SGD(model_parameters,
                    lr=opt.learning_rate,
                    momentum=opt.momentum,
                    dampening=dampening,
                    weight_decay=opt.weight_decay,
                    nesterov=opt.nesterov)
    
    assert opt.lr_scheduler in ['plateau', 'multistep']
    assert not (opt.lr_scheduler == 'plateau' and opt.no_val)
    if opt.lr_scheduler == 'plateau':
        scheduler = lr_scheduler.ReduceLROnPlateau(
            optimizer, 'min', patience=opt.plateau_patience)
    else:
        scheduler = lr_scheduler.MultiStepLR(optimizer,
                                             opt.multistep_milestones)
    
    return (train_loader, train_sampler, train_logger, train_batch_logger,
            optimizer, scheduler)
    
    
    def get_val_utils(opt):
    normalize = get_normalize_method(opt.mean, opt.std, opt.no_mean_norm,
                                     opt.no_std_norm)
    spatial_transform = [
        Resize(opt.sample_size),
        CenterCrop(opt.sample_size),
        ToTensor()
    ]
    if opt.input_type == 'flow':
        spatial_transform.append(PickFirstChannels(n=2))
    spatial_transform.extend([ScaleValue(opt.value_scale), normalize])
    spatial_transform = Compose(spatial_transform)
    
    temporal_transform = []
    if opt.sample_t_stride > 1:
        temporal_transform.append(TemporalSubsampling(opt.sample_t_stride))
    temporal_transform.append(
        TemporalEvenCrop(opt.sample_duration, opt.n_val_samples))
    temporal_transform = TemporalCompose(temporal_transform)
    
    val_data, collate_fn = get_validation_data(opt.video_path,
                                               opt.annotation_path, opt.dataset,
                                               opt.input_type, opt.file_type,
                                               spatial_transform,
                                               temporal_transform)
    if opt.distributed:
        val_sampler = torch.utils.data.distributed.DistributedSampler(
            val_data, shuffle=False)
    else:
        val_sampler = None
    val_loader = torch.utils.data.DataLoader(val_data,
                                             batch_size=(opt.batch_size //
                                                         opt.n_val_samples),
                                             shuffle=False,
                                             num_workers=opt.n_threads,
                                             pin_memory=True,
                                             sampler=val_sampler,
                                             worker_init_fn=worker_init_fn,
                                             collate_fn=collate_fn)
    
    if opt.is_master_node:
        val_logger = Logger(opt.result_path / 'val.log',
                            ['epoch', 'loss', 'acc'])
    else:
        val_logger = None
    
    return val_loader, val_logger
    
    
    def get_inference_utils(opt):
    assert opt.inference_crop in ['center', 'nocrop']
    
    normalize = get_normalize_method(opt.mean, opt.std, opt.no_mean_norm,
                                     opt.no_std_norm)
    
    spatial_transform = [Resize(opt.sample_size)]
    if opt.inference_crop == 'center':
        spatial_transform.append(CenterCrop(opt.sample_size))
    spatial_transform.append(ToTensor())
    if opt.input_type == 'flow':
        spatial_transform.append(PickFirstChannels(n=2))
    spatial_transform.extend([ScaleValue(opt.value_scale), normalize])
    spatial_transform = Compose(spatial_transform)
    
    temporal_transform = []
    if opt.sample_t_stride > 1:
        temporal_transform.append(TemporalSubsampling(opt.sample_t_stride))
    temporal_transform.append(
        SlidingWindow(opt.sample_duration, opt.inference_stride))
    temporal_transform = TemporalCompose(temporal_transform)
    
    inference_data, collate_fn = get_inference_data(
        opt.video_path, opt.annotation_path, opt.dataset, opt.input_type,
        opt.file_type, opt.inference_subset, spatial_transform,
        temporal_transform)
    
    inference_loader = torch.utils.data.DataLoader(
        inference_data,
        batch_size=opt.inference_batch_size,
        shuffle=False,
        num_workers=opt.n_threads,
        pin_memory=True,
        worker_init_fn=worker_init_fn,
        collate_fn=collate_fn)
    
    return inference_loader, inference_data.class_names
    
    
    def save_checkpoint(save_file_path, epoch, arch, model, optimizer, scheduler):
    if hasattr(model, 'module'):
        model_state_dict = model.module.state_dict()
    else:
        model_state_dict = model.state_dict()
    save_states = {
        'epoch': epoch,
        'arch': arch,
        'state_dict': model_state_dict,
        'optimizer': optimizer.state_dict(),
        'scheduler': scheduler.state_dict()
    }
    torch.save(save_states, save_file_path)
    
    
    def main_worker(index, opt):
    random.seed(opt.manual_seed)
    np.random.seed(opt.manual_seed)
    torch.manual_seed(opt.manual_seed)
    
    if index >= 0 and opt.device.type == 'cuda':
        opt.device = torch.device(f'cuda:{index}')
    
    if opt.distributed:
        opt.dist_rank = opt.dist_rank * opt.ngpus_per_node + index
        dist.init_process_group(backend='nccl',
                                init_method=opt.dist_url,
                                world_size=opt.world_size,
                                rank=opt.dist_rank)
        opt.batch_size = int(opt.batch_size / opt.ngpus_per_node)
        opt.n_threads = int(
            (opt.n_threads + opt.ngpus_per_node - 1) / opt.ngpus_per_node)
    opt.is_master_node = not opt.distributed or opt.dist_rank == 0
    
    model = generate_model(opt)  # build the network specified by the options
    if opt.batchnorm_sync:
        assert opt.distributed, 'SyncBatchNorm only supports DistributedDataParallel.'
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
    if opt.pretrain_path:
        model = load_pretrained_model(model, opt.pretrain_path, opt.model,
                                      opt.n_finetune_classes)
    if opt.resume_path is not None:
        model = resume_model(opt.resume_path, opt.arch, model)
    model = make_data_parallel(model, opt.distributed, opt.device)
    
    if opt.pretrain_path:
        parameters = get_fine_tuning_parameters(model, opt.ft_begin_module)
    else:
        parameters = model.parameters()
    
    if opt.is_master_node:
        print(model)
    
    criterion = CrossEntropyLoss().to(opt.device)
    
    if not opt.no_train:
        (train_loader, train_sampler, train_logger, train_batch_logger,
         optimizer, scheduler) = get_train_utils(opt, parameters)
        if opt.resume_path is not None:
            opt.begin_epoch, optimizer, scheduler = resume_train_utils(
                opt.resume_path, opt.begin_epoch, optimizer, scheduler)
            if opt.overwrite_milestones:
                scheduler.milestones = opt.multistep_milestones
    if not opt.no_val:
        val_loader, val_logger = get_val_utils(opt)
    
    if opt.tensorboard and opt.is_master_node:
        from torch.utils.tensorboard import SummaryWriter
        if opt.begin_epoch == 1:
            tb_writer = SummaryWriter(log_dir=opt.result_path)
        else:
            tb_writer = SummaryWriter(log_dir=opt.result_path,
                                      purge_step=opt.begin_epoch)
    else:
        tb_writer = None
    
    prev_val_loss = None
    for i in range(opt.begin_epoch, opt.n_epochs + 1):
        if not opt.no_train:
            if opt.distributed:
                train_sampler.set_epoch(i)
            current_lr = get_lr(optimizer)
            train_epoch(i, train_loader, model, criterion, optimizer,
                        opt.device, current_lr, train_logger,
                        train_batch_logger, tb_writer, opt.distributed)
    
            if i % opt.checkpoint == 0 and opt.is_master_node:
                save_file_path = opt.result_path / 'save_{}.pth'.format(i)
                save_checkpoint(save_file_path, i, opt.arch, model, optimizer,
                                scheduler)
    
        if not opt.no_val:
            prev_val_loss = val_epoch(i, val_loader, model, criterion,
                                      opt.device, val_logger, tb_writer,
                                      opt.distributed)
    
        if not opt.no_train and opt.lr_scheduler == 'multistep':
            scheduler.step()
        elif not opt.no_train and opt.lr_scheduler == 'plateau':
            scheduler.step(prev_val_loss)
    
    if opt.inference:
        inference_loader, inference_class_names = get_inference_utils(opt)
        inference_result_path = opt.result_path / '{}.json'.format(
            opt.inference_subset)
    
        inference.inference(inference_loader, model, inference_result_path,
                            inference_class_names, opt.inference_no_average,
                            opt.output_topk)
    
    
    if __name__ == '__main__':
    opt = get_opt()
    
    opt.device = torch.device('cpu' if opt.no_cuda else 'cuda')
    if not opt.no_cuda:
        cudnn.benchmark = True
    if opt.accimage:
        torchvision.set_image_backend('accimage')
    
    opt.ngpus_per_node = torch.cuda.device_count()
    if opt.distributed:
        opt.world_size = opt.ngpus_per_node * opt.world_size
        mp.spawn(main_worker, nprocs=opt.ngpus_per_node, args=(opt,))
    else:
        main_worker(-1, opt)

5.3 model.py

    import torch
    from torch import nn
    
    # Import the network definitions (resnet, resnet2p1d, pre_act_resnet, wide_resnet, resnext, densenet) from the models directory
    from models import resnet, resnet2p1d, pre_act_resnet, wide_resnet, resnext, densenet 
    
    
    def get_module_name(name):
    name = name.split('.')
    if name[0] == 'module':
        i = 1
    else:
        i = 0
    if name[i] == 'features':
        i += 1
    
    return name[i]
    
    
    def get_fine_tuning_parameters(model, ft_begin_module):
    if not ft_begin_module:
        return model.parameters()
    
    parameters = []
    add_flag = False
    for k, v in model.named_parameters():
        if ft_begin_module == get_module_name(k):
            add_flag = True
    
        if add_flag:
            parameters.append({'params': v})
    
    return parameters
    
    # Construct the model from one of the available network families
    def generate_model(opt):
    assert opt.model in [
        'resnet', 'resnet2p1d', 'preresnet', 'wideresnet', 'resnext', 'densenet'
    ]
    
    if opt.model == 'resnet':
        model = resnet.generate_model(model_depth=opt.model_depth,
                                      n_classes=opt.n_classes,
                                      n_input_channels=opt.n_input_channels,
                                      shortcut_type=opt.resnet_shortcut,
                                      conv1_t_size=opt.conv1_t_size,
                                      conv1_t_stride=opt.conv1_t_stride,
                                      no_max_pool=opt.no_max_pool,
                                      widen_factor=opt.resnet_widen_factor)
    elif opt.model == 'resnet2p1d':
        model = resnet2p1d.generate_model(model_depth=opt.model_depth,
                                          n_classes=opt.n_classes,
                                          n_input_channels=opt.n_input_channels,
                                          shortcut_type=opt.resnet_shortcut,
                                          conv1_t_size=opt.conv1_t_size,
                                          conv1_t_stride=opt.conv1_t_stride,
                                          no_max_pool=opt.no_max_pool,
                                          widen_factor=opt.resnet_widen_factor)
    elif opt.model == 'wideresnet':
        model = wide_resnet.generate_model(
            model_depth=opt.model_depth,
            k=opt.wide_resnet_k,
            n_classes=opt.n_classes,
            n_input_channels=opt.n_input_channels,
            shortcut_type=opt.resnet_shortcut,
            conv1_t_size=opt.conv1_t_size,
            conv1_t_stride=opt.conv1_t_stride,
            no_max_pool=opt.no_max_pool)
    elif opt.model == 'resnext':
        model = resnext.generate_model(model_depth=opt.model_depth,
                                       cardinality=opt.resnext_cardinality,
                                       n_classes=opt.n_classes,
                                       n_input_channels=opt.n_input_channels,
                                       shortcut_type=opt.resnet_shortcut,
                                       conv1_t_size=opt.conv1_t_size,
                                       conv1_t_stride=opt.conv1_t_stride,
                                       no_max_pool=opt.no_max_pool)
    elif opt.model == 'preresnet':
        model = pre_act_resnet.generate_model(
            model_depth=opt.model_depth,
            n_classes=opt.n_classes,
            n_input_channels=opt.n_input_channels,
            shortcut_type=opt.resnet_shortcut,
            conv1_t_size=opt.conv1_t_size,
            conv1_t_stride=opt.conv1_t_stride,
            no_max_pool=opt.no_max_pool)
    elif opt.model == 'densenet':
        model = densenet.generate_model(model_depth=opt.model_depth,
                                        n_classes=opt.n_classes,
                                        n_input_channels=opt.n_input_channels,
                                        conv1_t_size=opt.conv1_t_size,
                                        conv1_t_stride=opt.conv1_t_stride,
                                        no_max_pool=opt.no_max_pool)
    
    return model
    
    # Load a pretrained model and replace its classification head for fine-tuning
    def load_pretrained_model(model, pretrain_path, model_name, n_finetune_classes):
    if pretrain_path:
        print('loading pretrained model {}'.format(pretrain_path))
        pretrain = torch.load(pretrain_path, map_location='cpu')
    
        model.load_state_dict(pretrain['state_dict'])
        tmp_model = model
        if model_name == 'densenet':
            tmp_model.classifier = nn.Linear(tmp_model.classifier.in_features,
                                             n_finetune_classes)
        else:
            tmp_model.fc = nn.Linear(tmp_model.fc.in_features,
                                     n_finetune_classes)
    
    return model
    
    
    def make_data_parallel(model, is_distributed, device):
    if is_distributed:
        if device.type == 'cuda' and device.index is not None:
            torch.cuda.set_device(device)
            model.to(device)
    
            model = nn.parallel.DistributedDataParallel(model,
                                                        device_ids=[device])
        else:
            model.to(device)
            model = nn.parallel.DistributedDataParallel(model)
    elif device.type == 'cuda':
        model = nn.DataParallel(model, device_ids=None).cuda()
    
    return model

References:

[1] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, "Learning Spatiotemporal Features with 3D Convolutional Networks" (C3D), in Proc. ICCV, 2015.
[2] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proc. CVPR, 2016.
[3] K. Hara, H. Kataoka, and Y. Satoh, "Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition," in Proc. ICCV Workshops, 2017.

P.S.: Last but not least

The camp brought in a group of excellent instructors to teach us how to reproduce top-conference papers, and provided free GPU compute for hands-on practice with code optimization and parameter tuning. This was my first time taking part in this kind of activity, and much of the material is still new to me. If any expert or fellow learner has a deeper understanding of the code in Section 5, you are welcome to discuss it in the comments!

If you love deep learning and want to build the skills to work with cutting-edge research, you are welcome to join Baidu's top-conference paper reproduction camp.

课程链接:https://aistudio.baidu.com/aistudio/education/group/info/1340
