A Summary of Two Ways to Implement Focal Loss in PyTorch
Updated: 2020-01-02 09:52:18  Author: WYXHAHAHA123
This post shares a summary of two ways to implement focal loss in PyTorch. It should make a good reference; we hope it helps.
Without further ado, here is the code!
import torch
import torch.nn.functional as F
import numpy as np
from torch.autograd import Variable
'''
Two ways to implement focal loss in PyTorch (discussed here in the context of a segmentation task).
The loss computation takes class imbalance into account; assume 6 classes in total, including the background class.
'''
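# For reference (an added note, not part of the original post): focal loss, from
# Lin et al., "Focal Loss for Dense Object Detection", is defined as
#   FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)
# where p_t is the predicted probability of the ground-truth class. Both
# implementations below use gamma = 2; the alpha_t class weights are computed
# by compute_class_weights but left disabled in the final loss.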
def compute_class_weights(histogram):
    # Inverse log-frequency weighting: the rarer a class, the larger its weight.
    classWeights = np.ones(6, dtype=np.float32)
    normHist = histogram / np.sum(histogram)
    for i in range(6):
        classWeights[i] = 1 / (np.log(1.10 + normHist[i]))
    return classWeights
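# A quick sanity check of the weighting scheme (an illustrative sketch, not part
# of the original script): for a histogram of [900, 50, 20, 15, 10, 5] pixels
# per class, normHist is [0.9, 0.05, 0.02, 0.015, 0.01, 0.005] and
# compute_class_weights returns roughly [1.44, 7.15, 8.82, 9.19, 9.58, 10.02],
# so the dominant class is down-weighted relative to the rare ones.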
def focal_loss_my(input, target):
    '''
    :param input: shape [batch_size, num_classes, H, W], the raw convolution output with no activation applied
    :param target: shape [batch_size, H, W]
    :return: scalar focal loss
    '''
    n, c, h, w = input.size()
    target = target.long()
    input = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)
    target = target.contiguous().view(-1)
    number_0 = torch.sum(target == 0).item()
    number_1 = torch.sum(target == 1).item()
    number_2 = torch.sum(target == 2).item()
    number_3 = torch.sum(target == 3).item()
    number_4 = torch.sum(target == 4).item()
    number_5 = torch.sum(target == 5).item()
    frequency = torch.tensor((number_0, number_1, number_2, number_3, number_4, number_5), dtype=torch.float32)
    frequency = frequency.numpy()
    classWeights = compute_class_weights(frequency)
    '''
    Compute each class's weight from its share of the current ground-truth labels.
    '''
    # weights = torch.from_numpy(classWeights).float().cuda()
    weights = torch.from_numpy(classWeights).float()  # computed but not applied below (weight=None)
    focal_frequency = F.nll_loss(F.softmax(input, dim=1), target, reduction='none')
    '''
    As discussed in an earlier post,
    F.nll_loss(torch.log(F.softmax(inputs, dim=1)), target) is functionally identical to F.cross_entropy
    (see the equivalence check after this function).
    F.nll_loss effectively one-hot encodes the target into a tensor with the same shape as the input,
    multiplies it element-wise with its first argument,
    and thus picks out log(p_gt), the log-probability of the correct class for each sample.
    With the log removed here, focal_frequency has shape [num_samples]
    and holds the probability of the ground-truth class, negated.
    '''
    focal_frequency += 1.0  # shape [num_samples], now 1 - P(gt_class)
    focal_frequency = torch.pow(focal_frequency, 2)  # (1 - P(gt_class))^gamma with gamma = 2
    focal_frequency = focal_frequency.repeat(c, 1)
    '''
    After the repeat, focal_frequency has shape [num_classes, num_samples].
    '''
    focal_frequency = focal_frequency.transpose(1, 0)
    loss = F.nll_loss(focal_frequency * (torch.log(F.softmax(input, dim=1))), target, weight=None,
                      reduction='mean')
    return loss
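# A minimal check of the equivalence claimed in the docstring above (an added
# sketch, not part of the original post): F.nll_loss over log-softmax should
# match F.cross_entropy on the same inputs.
def _check_nll_equals_cross_entropy():
    x = torch.rand(8, 6)           # dummy logits for 8 samples, 6 classes
    t = torch.randint(0, 6, (8,))  # dummy integer targets
    nll = F.nll_loss(torch.log(F.softmax(x, dim=1)), t)
    ce = F.cross_entropy(x, t)
    assert torch.allclose(nll, ce, atol=1e-6)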
def focal_loss_zhihu(input, target):
    '''
    Based on the approach given in this Zhihu post: https://zhuanlan.zhihu.com/p/28527749
    :param input: shape [batch_size, num_classes, H, W]
    :param target: shape [batch_size, H, W]
    :return: scalar focal loss
    '''
    n, c, h, w = input.size()
    target = target.long()
    inputs = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)
    target = target.contiguous().view(-1)
    N = inputs.size(0)
    C = inputs.size(1)
    number_0 = torch.sum(target == 0).item()
    number_1 = torch.sum(target == 1).item()
    number_2 = torch.sum(target == 2).item()
    number_3 = torch.sum(target == 3).item()
    number_4 = torch.sum(target == 4).item()
    number_5 = torch.sum(target == 5).item()
    frequency = torch.tensor((number_0, number_1, number_2, number_3, number_4, number_5), dtype=torch.float32)
    frequency = frequency.numpy()
    classWeights = compute_class_weights(frequency)
    weights = torch.from_numpy(classWeights).float()
    weights = weights[target.view(-1)]  # crucial: look up each sample's weight by its ground-truth class
    gamma = 2
    P = F.softmax(inputs, dim=1)  # shape [num_samples, num_classes]
    class_mask = inputs.data.new(N, C).fill_(0)
    class_mask = Variable(class_mask)
    ids = target.view(-1, 1)
    class_mask.scatter_(1, ids.data, 1.)  # shape [num_samples, num_classes], one-hot encoding of the target
    probs = (P * class_mask).sum(1).view(-1, 1)  # shape [num_samples, 1], probability of the ground-truth class
    log_p = probs.log()
    print('in calculating batch_loss', weights.shape, probs.shape, log_p.shape)
    # batch_loss = -weights * (torch.pow((1 - probs), gamma)) * log_p  # weighted variant; see the note after this function
    batch_loss = -(torch.pow((1 - probs), gamma)) * log_p
    print(batch_loss.shape)
    loss = batch_loss.mean()
    return loss
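# Note on the commented-out weighted variant above (an added observation, not
# from the original post): weights has shape [num_samples] while probs and
# log_p have shape [num_samples, 1], so multiplying them directly would
# broadcast to a [num_samples, num_samples] matrix. To actually apply the
# class weights, reshape first:
#
#     batch_loss = -weights.view(-1, 1) * torch.pow((1 - probs), gamma) * log_p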
if __name__ == '__main__':
    pred = torch.rand((2, 6, 5, 5))
    y = torch.from_numpy(np.random.randint(0, 6, (2, 5, 5)))
    loss1 = focal_loss_my(pred, y)
    loss2 = focal_loss_zhihu(pred, y)
    print('loss1', loss1)
    print('loss2', loss2)
    '''
    Example output (the inputs are random, so exact values vary from run to run):
    in calculating batch_loss torch.Size([50]) torch.Size([50, 1]) torch.Size([50, 1])
    torch.Size([50, 1])
    loss1 tensor(1.3166)
    loss2 tensor(1.3166)
    '''
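For reference, the same computation can be collapsed into a few lines with F.cross_entropy and reduction='none'. The following is a minimal sketch under the same assumptions as above (raw logits of shape [batch_size, num_classes, H, W] and integer labels of shape [batch_size, H, W]); focal_loss_compact is a name introduced here for illustration, not a function from the original post:

def focal_loss_compact(input, target, gamma=2):
    # F.cross_entropy accepts [N, C, H, W] logits and [N, H, W] targets directly
    ce = F.cross_entropy(input, target.long(), reduction='none')  # per-pixel -log(p_gt)
    p_gt = torch.exp(-ce)  # recover the probability of the ground-truth class
    return (torch.pow(1 - p_gt, gamma) * ce).mean()

Without the per-class weights, this should agree with the unweighted focal loss computed by both functions above.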
That concludes this summary of two ways to implement focal loss in PyTorch. We hope it serves as a useful reference, and thank you for supporting 腳本之家.