Semi-Supervised Medical Image Segmentation (1): CANet (2023), Context-aware network fusing transformer and V-Net for semi-supervised segmentation of 3D left atrium
CANet is a context-aware semi-supervised segmentation network for the 3D left atrium that fuses a Transformer with V-Net.
- Research background and motivation
  - Background
  - Motivation
- Main contributions
- Method
  - DAM (discriminator with attention mechanism)
  - Learning strategy
- Summary
Research background and motivation
Background:
For medical experts, producing reliable annotations is not only time-consuming and labor-intensive; because the process depends on subjective judgment, manual annotation can also lead to inconsistent segmentation results.
Medical institutions typically hold large amounts of unlabeled data, and using it sensibly can unlock its latent value.
Motivation:
A 3D medical image consists of a set of slices: a model must extract internal context from each individual slice, and it must also capture the relationships between different tissues and regions across multiple slices.
Existing methods can usually exploit only one of these two kinds of information, not both at once.
Main contributions
- Successfully integrated a Transformer into the V-Net architecture
- Designed a discriminator with an attention mechanism and introduced shape- and position-based prior knowledge
- Achieved a significant performance improvement on the LA (left atrium) segmentation task
Method

A Transformer is used at the V-Net bottleneck to extract global context information:
```python
def TransformerLayer(self, features):
    # Bottleneck feature map from the last encoder stage
    x5 = features[4]
    # Flatten to a token sequence and add positional encodings
    embedding_output = self.embeddings(x5)
    # Self-attention layers model global context across the volume
    transformer_output, attn_weights = self.transformer(embedding_output)
    # Map the token sequence back to a 3D feature map
    detransformer_output = self.detransformer(transformer_output)
    # Replace the bottleneck features before they reach the decoder
    features[4] = detransformer_output
    return features
```
The feature map x5 produced by the final encoder layer is processed with positional encoding, passed through 12 self-attention layers, and then written back into the feature list so the decoder can generate the segmentation labels.
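As a rough illustration of the embedding step (self.embeddings), a ViT-style implementation could look like the sketch below. The class name, channel sizes, and token count are assumptions for illustration, not the paper's exact code.
```python
import torch
import torch.nn as nn

class BottleneckEmbeddings(nn.Module):
    """Hypothetical embedding step before the bottleneck Transformer:
    project channels, flatten the 3D grid into tokens, add positions."""
    def __init__(self, in_channels=256, hidden_size=512, n_tokens=180):
        super().__init__()
        self.proj = nn.Conv3d(in_channels, hidden_size, kernel_size=1)
        # Learned positional encoding; n_tokens must equal D*H*W of x5
        self.pos_embed = nn.Parameter(torch.zeros(1, n_tokens, hidden_size))

    def forward(self, x5):                    # x5: (B, C, D, H, W)
        x = self.proj(x5)                     # (B, hidden, D, H, W)
        x = x.flatten(2).transpose(1, 2)      # (B, D*H*W, hidden)
        return x + self.pos_embed             # position-aware token sequence
```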
DAM (discriminator with attention mechanism)
The discriminator consists of five convolutional layers and an MLP head; on top of the original design, a modified SENet (squeeze-and-excitation) module is added to strengthen the discriminator.
```python
import torch
import torch.nn as nn
# SEAttention is an external squeeze-and-excitation block (a 3D sketch is
# given further below); it is not part of torch.

class FC3DDiscriminator(nn.Module):
    def __init__(self, num_classes, ndf=64, n_channel=1):
        super(FC3DDiscriminator, self).__init__()
        # Five stride-2 convolutions: downsample by 16 overall
        # (conv0/conv1 are parallel stems for the map and the image)
        self.conv0 = nn.Conv3d(num_classes, ndf, kernel_size=4, stride=2, padding=1)
        self.conv1 = nn.Conv3d(n_channel, ndf, kernel_size=4, stride=2, padding=1)
        self.conv2 = nn.Conv3d(ndf, ndf*2, kernel_size=4, stride=2, padding=1)
        self.conv3 = nn.Conv3d(ndf*2, ndf*4, kernel_size=4, stride=2, padding=1)
        self.conv4 = nn.Conv3d(ndf*4, ndf*8, kernel_size=4, stride=2, padding=1)
        # SE block built once here so its weights are registered and trained;
        # instantiating it inside forward() would re-initialize it every step
        self.se = SEAttention(channel=ndf*8, reduction=8)
        self.avgpool = nn.AvgPool3d((7, 7, 5))
        self.classifier = nn.Linear(ndf*8, 2)
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
        self.dropout = nn.Dropout3d(0.5)

    def forward(self, map, image):
        batch_size = map.shape[0]
        # Fuse the segmentation map and the raw image by element-wise addition
        map_feature = self.conv0(map)
        image_feature = self.conv1(image)
        x = torch.add(map_feature, image_feature)
        x = self.leaky_relu(x)
        x = self.dropout(x)
        x = self.conv2(x)
        x = self.leaky_relu(x)
        x = self.dropout(x)
        x = self.conv3(x)
        x = self.leaky_relu(x)
        x = self.dropout(x)
        x = self.conv4(x)
        x = self.leaky_relu(x)
        x = self.se(x)                 # channel attention on the deepest features
        x = self.avgpool(x)
        x = x.view(batch_size, -1)
        # 2-way logits (labeled vs. unlabeled); no softmax here, since
        # F.cross_entropy in the training loop expects raw logits
        x = self.classifier(x)
        return x
```

Note: in my reproduction, using SE attention actually performed worse than leaving it out.
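For reference, here is a minimal 3D squeeze-and-excitation block of the kind assumed above. This is a sketch: the widely shared SEAttention implementations are 2D, so a 3D variant like this (or an equivalent) is needed for the 3D discriminator, and the exact module used may differ.
```python
import torch.nn as nn

class SEAttention(nn.Module):
    """Minimal 3D squeeze-and-excitation block (sketch)."""
    def __init__(self, channel=512, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, D, H, W)
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c))         # per-channel weights in (0, 1)
        return x * w.view(b, c, 1, 1, 1)             # rescale each channel
```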
```python
x9 = self.block_nine(x8_up)
if self.has_dropout:
    x9 = self.dropout(x9)
out = self.out_conv(x9)
out_tanh = self.tanh(out)       # SDF branch: values squashed into (-1, 1)
out_seg = self.out_conv2(x9)    # segmentation branch: raw logits
```
At the final layer of the decoder, a tanh activation maps the elements of the signed-distance output into the interval (-1, 1).
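For intuition, the ground-truth signed distance map that this branch regresses against (compute_sdf in the training code below) can be approximated as in the following sketch, which follows the SASSNet-style normalization. compute_sdf_sketch is a hypothetical stand-in, not the paper's exact helper.
```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt
from skimage import segmentation as skimage_seg

def compute_sdf_sketch(mask):
    """Sketch of a normalized signed distance map for one binary 3D mask:
    negative inside the object, positive outside, zero on the boundary,
    scaled to [-1, 1] so it matches the tanh output range."""
    posmask = mask.astype(bool)
    if not posmask.any():
        return np.zeros_like(mask, dtype=np.float32)
    negmask = ~posmask
    posdis = edt(posmask)                            # distance to boundary, inside
    negdis = edt(negmask)                            # distance to boundary, outside
    boundary = skimage_seg.find_boundaries(posmask, mode='inner')
    sdf = negdis / negdis.max() - posdis / posdis.max()
    sdf[boundary] = 0
    return sdf.astype(np.float32)
```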
```python
for i_batch, sampled_batch in enumerate(trainloader):
    time2 = time.time()
    volume_batch, label_batch = sampled_batch['image'], sampled_batch['label']
    volume_batch, label_batch = volume_batch.cuda(), label_batch.cuda()
    # Discriminator targets, fixed by the sampler: the first two samples
    # in each batch are labeled (1), the last two unlabeled (0)
    Dtarget = torch.tensor([1, 1, 0, 0]).cuda()
    model.train()
    D.eval()   # freeze the discriminator while the segmentation network updates
    outputs_tanh, outputs = model(volume_batch)   # SDF branch and logits branch
    outputs_soft = torch.sigmoid(outputs)
```

The batch size is set to 4: each training step feeds in two labeled and two unlabeled images. Dtarget holds the discriminator's targets, indicating whether the DAM's input comes from the labeled or the unlabeled dataset; a sampler that produces such batches is sketched below.
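Such batches are typically produced by a two-stream sampler that draws the first half of each batch from labeled indices and the second half from unlabeled ones. The sketch below is an assumed simplification; the UA-MT/SASSNet codebases achieve the same effect with a TwoStreamBatchSampler.
```python
import itertools
import numpy as np
from torch.utils.data import Sampler

class TwoStreamBatchSamplerSketch(Sampler):
    """Sketch: each batch takes its first half from labeled indices and its
    second half from unlabeled indices, matching Dtarget = [1, 1, 0, 0]."""
    def __init__(self, labeled_idxs, unlabeled_idxs, batch_size, labeled_bs):
        self.labeled_idxs = list(labeled_idxs)
        self.unlabeled_idxs = list(unlabeled_idxs)
        self.labeled_bs = labeled_bs
        self.unlabeled_bs = batch_size - labeled_bs

    def __len__(self):
        return len(self.labeled_idxs) // self.labeled_bs

    def __iter__(self):
        lab = np.random.permutation(self.labeled_idxs)
        unlab = itertools.cycle(np.random.permutation(self.unlabeled_idxs))
        for i in range(len(self)):
            lab_batch = list(lab[i * self.labeled_bs:(i + 1) * self.labeled_bs])
            unlab_batch = [next(unlab) for _ in range(self.unlabeled_bs)]
            yield lab_batch + unlab_batch
```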
Learning strategy
Loss function
Supervised loss = segmentation loss + α × LSM loss, where the LSM (signed distance map) term is the mean squared error between the tanh branch and the ground-truth signed distance map (in the code below this weight appears as args.beta).
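In symbols, matching the training code below, where β weights the SDF term and λ(t) is the ramped-up weight on the adversarial term:
```latex
L_{\mathrm{sup}} = L_{\mathrm{dice}} + \beta \, L_{\mathrm{sdf}},
\qquad
L_{\mathrm{total}} = L_{\mathrm{sup}} + \lambda(t) \, L_{\mathrm{adv}}
```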

```python
## calculate the loss
with torch.no_grad():
    # Ground-truth signed distance maps, computed without gradients
    gt_dis = compute_sdf(label_batch[:].cpu().numpy(), outputs[:labeled_bs, 0, ...].shape)
    gt_dis = torch.from_numpy(gt_dis).float().cuda()
# SDF regression on the tanh branch (labeled samples only)
loss_sdf = mse_loss(outputs_tanh[:labeled_bs, 0, ...], gt_dis)
loss_seg = ce_loss(outputs[:labeled_bs, 0, ...], label_batch[:labeled_bs].float())
loss_seg_dice = losses.dice_loss(outputs_soft[:labeled_bs, 0, :, :, :], label_batch[:labeled_bs] == 1)
# Ramp-up weight lambda(t) for the adversarial term
consistency_weight = get_current_consistency_weight(iter_num // 150)
supervised_loss = loss_seg_dice + args.beta * loss_sdf
# Discriminator scores for the unlabeled predictions
Doutputs = D(outputs_tanh[labeled_bs:], volume_batch[labeled_bs:])
# G wants D to misclassify unlabeled data as labeled data, so the
# adversarial targets are the "labeled" labels (1, 1)
loss_adv = F.cross_entropy(Doutputs, (Dtarget[:labeled_bs]).long())
loss = supervised_loss + consistency_weight * loss_adv
optimizer.zero_grad()
loss.backward()
optimizer.step()
dc = metrics.dice(torch.argmax(outputs_soft[:labeled_bs], dim=1), label_batch[:labeled_bs])
```
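get_current_consistency_weight in the snippet above is, in this family of codebases, the standard sigmoid ramp-up from the mean-teacher literature. A sketch with assumed default values (the real ones come from the script's args):
```python
import numpy as np

def sigmoid_rampup(current, rampup_length):
    """Sigmoid ramp-up from the mean-teacher line of work: exp(-5(1-t)^2)."""
    if rampup_length == 0:
        return 1.0
    phase = 1.0 - np.clip(current, 0.0, rampup_length) / rampup_length
    return float(np.exp(-5.0 * phase * phase))

def get_current_consistency_weight(epoch, consistency=0.01, rampup=40.0):
    # Assumed default values; in the script they come from args.
    return consistency * sigmoid_rampup(epoch, rampup)
```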

For a detailed explanation of the discriminator, see this article:
Semi-Supervised 3D Medical Image Segmentation (4): SASSNet
Summary
After digging into several semi-supervised learning schemes, a full training run of this model takes roughly 2 to 3 hours. Comparative analysis shows that the model trained with only the limited labeled data (plus unlabeled data) trails the fully annotated setting by no more than about 0.1 to 0.2 on each metric.
