Balanced Meta-Softmax for Long-Tailed Visual Recognition

Others have already covered the background of this paper thoroughly, so here is just the core loss implementation.

    import torch
    import torch.nn.functional as F

    def balanced_softmax_loss(labels, logits, sample_per_class, reduction):
        """Compute the Balanced Softmax Loss between `logits` and the ground-truth `labels`.
        Args:
          labels: An int tensor of size [batch].
          logits: A float tensor of size [batch, no_of_classes].
          sample_per_class: An int tensor of size [no_of_classes].
          reduction: string. One of "none", "mean", "sum".
        Returns:
          loss: A float tensor. Balanced Softmax Loss.
        """
        spc = sample_per_class.type_as(logits)
        # Broadcast the per-class sample counts across the batch dimension.
        spc = spc.unsqueeze(0).expand(logits.shape[0], -1)
        # Shift each logit by log(n_j) so the softmax accounts for the class prior.
        logits = logits + spc.log()
        loss = F.cross_entropy(input=logits, target=labels, reduction=reduction)
        return loss
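
As a quick sanity check, here is a minimal usage sketch. The batch size, number of classes, and the toy per-class counts (1000 / 100 / 10) are illustrative assumptions, not values from the paper.

    import torch

    # Toy long-tailed setup: three classes with 1000 / 100 / 10 samples (made up).
    sample_per_class = torch.tensor([1000, 100, 10])
    logits = torch.randn(4, 3)           # batch of 4, 3 classes
    labels = torch.tensor([0, 2, 1, 2])  # ground-truth class indices

    loss = balanced_softmax_loss(labels, logits, sample_per_class, reduction="mean")
    print(loss.item())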

This shift makes the loss focus more effectively on the minority classes. Because log(n_y) is small for a tail class, its adjusted logit is pulled down, so the cross-entropy forces the model to learn a larger raw logit for that class to keep the loss low; at test time the shift is dropped and tail classes keep their boosted scores. That is the gist. The paper also proposes a technique called the Meta Sampler, whose details I have not worked through here.
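
To make this concrete, here is a small numeric sketch (the two-class counts of 1000 vs. 10 are made up): with identical raw logits, the log-prior shift makes the tail-class loss much larger than under plain softmax, which is exactly the pressure that pushes tail-class raw logits upward during training.

    import torch
    import torch.nn.functional as F

    counts = torch.tensor([1000.0, 10.0])   # head class vs. tail class (illustrative)
    z = torch.tensor([[2.0, 2.0]])          # identical raw logits for both classes
    tail = torch.tensor([1])                # the label is the tail class

    # Plain softmax sees no difference between the classes: loss = log 2 ≈ 0.693
    print(F.cross_entropy(z, tail))

    # Balanced softmax shifts each logit by log(n_j), pulling the tail logit down,
    # so the same raw logits now incur loss = log(1 + 1000/10) ≈ 4.62 for the tail label.
    print(F.cross_entropy(z + counts.log(), tail))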
