elektronn3.modules.lovasz_losses module¶
Lovasz-Softmax and Jaccard hinge loss in PyTorch. Maxim Berman 2018 ESAT-PSI KU Leuven (MIT License)
class elektronn3.modules.lovasz_losses.StableBCELoss(*args: Any, **kwargs: Any)[source]¶
    Bases: torch.nn.modules.
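StableBCELoss combines the sigmoid and the binary cross-entropy into one numerically stable expression. A minimal pure-Python sketch of that standard formulation (not the module's actual tensor code):

```python
import math

def stable_bce(logit, target):
    # Stable BCE on a raw logit x with target z in {0, 1}:
    #   max(x, 0) - x*z + log(1 + exp(-|x|))
    # Algebraically equal to -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x)),
    # but exp() never receives a large positive argument, so it cannot overflow.
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))
```

The naive formulation overflows for large-magnitude logits; this one stays finite for any input.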
elektronn3.modules.lovasz_losses.binary_xloss(logits, labels, ignore=None)[source]¶
    Binary cross-entropy loss.
    logits: [B, H, W] Variable, logits at each pixel (between -infty and +infty)
    labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
    ignore: void class id
elektronn3.modules.lovasz_losses.flatten_binary_scores(scores, labels, ignore=None)[source]¶
    Flattens predictions in the batch (binary case).
    Removes labels equal to 'ignore'.
elektronn3.modules.lovasz_losses.flatten_probas(probas, labels, ignore=None)[source]¶
    Flattens predictions in the batch.
elektronn3.modules.lovasz_losses.iou(preds, labels, C, EMPTY=1.0, ignore=None, per_image=False)[source]¶
    Array of IoU for each (non-ignored) class.
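The per-class IoU semantics can be sketched in plain Python. This is a simplified, hypothetical `per_class_iou` helper: the actual function also supports `per_image` averaging and operates on tensors.

```python
def per_class_iou(preds, labels, C, EMPTY=1.0, ignore=None):
    # preds, labels: flat sequences of integer class ids.
    # Pixels whose label equals `ignore` are excluded entirely, and the
    # ignore id itself is skipped as a class; a class absent from both
    # preds and labels yields the EMPTY placeholder instead of 0/0.
    pairs = [(p, l) for p, l in zip(preds, labels) if l != ignore]
    ious = []
    for c in range(C):
        if c == ignore:
            continue
        inter = sum(1 for p, l in pairs if p == c and l == c)
        union = sum(1 for p, l in pairs if p == c or l == c)
        ious.append(EMPTY if union == 0 else inter / union)
    return ious
```

The EMPTY fallback keeps the mean IoU well-defined when a class does not occur at all in an image.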
elektronn3.modules.lovasz_losses.iou_binary(preds, labels, EMPTY=1.0, ignore=None, per_image=True)[source]¶
    IoU for the foreground class.
    binary: 1 foreground, 0 background
elektronn3.modules.lovasz_losses.lovasz_grad(gt_sorted)[source]¶
    Computes the gradient of the Lovasz extension w.r.t. sorted errors.
    See Alg. 1 in the paper.
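Alg. 1 of Berman et al. (2018) computes this gradient as the discrete derivative of the Jaccard loss over growing prefixes of the sorted errors. A pure-Python sketch of that algorithm (the module's version works on torch tensors with cumulative sums):

```python
def lovasz_grad(gt_sorted):
    # gt_sorted: 0/1 ground-truth values, sorted by decreasing error.
    # Returns the gradient of the Lovasz extension of the Jaccard loss:
    # the discrete differences of 1 - intersection/union as each
    # successive position is counted as an error.
    gts = sum(gt_sorted)
    jaccard = []
    inter, union = gts, gts
    for g in gt_sorted:
        inter -= g        # an erroneous foreground pixel removes a true positive
        union += 1 - g    # an erroneous background pixel adds a false positive
        jaccard.append(1.0 - inter / union)
    # Discrete differences give the per-position gradient.
    return [jaccard[0]] + [jaccard[i] - jaccard[i - 1]
                           for i in range(1, len(gt_sorted))]
```

The gradient entries sum to the Jaccard loss of the full prefix, which is what makes the sorted dot product in the hinge/softmax losses a valid extension.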
elektronn3.modules.lovasz_losses.lovasz_hinge(logits, labels, per_image=True, ignore=None)[source]¶
    Binary Lovasz hinge loss.
    logits: [B, H, W] Variable, logits at each pixel (between -infty and +infty)
    labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
    per_image: compute the loss per image instead of per batch
    ignore: void class id
elektronn3.modules.lovasz_losses.lovasz_hinge_flat(logits, labels)[source]¶
    Binary Lovasz hinge loss.
    logits: [P] Variable, logits at each prediction (between -infty and +infty)
    labels: [P] Tensor, binary ground truth labels (0 or 1)
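The core computation of the flat binary hinge loss can be sketched in pure Python (mirroring the algorithm, not the module's tensor implementation): hinge errors are sorted in decreasing order and dotted with the Lovasz gradient of the correspondingly sorted ground truth.

```python
def lovasz_grad(gt_sorted):
    # Alg. 1: discrete derivative of 1 - intersection/union over prefixes.
    gts = sum(gt_sorted)
    jaccard = []
    inter, union = gts, gts
    for g in gt_sorted:
        inter -= g
        union += 1 - g
        jaccard.append(1.0 - inter / union)
    return [jaccard[0]] + [jaccard[i] - jaccard[i - 1]
                           for i in range(1, len(gt_sorted))]

def lovasz_hinge_flat(logits, labels):
    # logits: flat list of raw scores; labels: flat list of 0/1 ground truth.
    signs = [2.0 * l - 1.0 for l in labels]              # {0,1} -> {-1,+1}
    errors = [1.0 - x * s for x, s in zip(logits, signs)]  # hinge margins
    order = sorted(range(len(errors)), key=lambda i: -errors[i])
    errors_sorted = [errors[i] for i in order]
    gt_sorted = [labels[i] for i in order]
    grad = lovasz_grad(gt_sorted)
    # ReLU on the sorted errors, weighted by the Lovasz gradient.
    return sum(max(e, 0.0) * g for e, g in zip(errors_sorted, grad))
```

Confident correct predictions give negative hinge errors, which the ReLU zeroes out, so the loss vanishes exactly when every pixel is on the right side of the margin.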
elektronn3.modules.lovasz_losses.lovasz_softmax(probas, labels, only_present=False, per_image=False, ignore=None)[source]¶
    Multi-class Lovasz-Softmax loss.
    probas: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1)
    labels: [B, H, W] Tensor, ground truth labels (between 0 and C - 1)
    only_present: average only on classes present in ground truth
    per_image: compute the loss per image instead of per batch
    ignore: void class labels
elektronn3.modules.lovasz_losses.lovasz_softmax_flat(probas, labels, only_present=False)[source]¶
    Multi-class Lovasz-Softmax loss.
    probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1)
    labels: [P] Tensor, ground truth labels (between 0 and C - 1)
    only_present: average only on classes present in ground truth
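The multi-class case applies the same sorted-gradient construction once per class to the absolute probability errors, then averages over classes. A pure-Python sketch of that per-class loop (lists instead of tensors; the real function vectorizes this):

```python
def lovasz_grad(gt_sorted):
    # Alg. 1: discrete derivative of 1 - intersection/union over prefixes.
    gts = sum(gt_sorted)
    jaccard = []
    inter, union = gts, gts
    for g in gt_sorted:
        inter -= g
        union += 1 - g
        jaccard.append(1.0 - inter / union)
    return [jaccard[0]] + [jaccard[i] - jaccard[i - 1]
                           for i in range(1, len(gt_sorted))]

def lovasz_softmax_flat(probas, labels, only_present=False):
    # probas: [P][C] nested lists of class probabilities, labels: [P] int ids.
    C = len(probas[0])
    losses = []
    for c in range(C):
        fg = [1.0 if l == c else 0.0 for l in labels]  # foreground mask for c
        if only_present and sum(fg) == 0:
            continue  # skip classes absent from the ground truth
        errors = [abs(f - p[c]) for f, p in zip(fg, probas)]
        order = sorted(range(len(errors)), key=lambda i: -errors[i])
        grad = lovasz_grad([fg[i] for i in order])
        losses.append(sum(errors[i] * g for i, g in zip(order, grad)))
    # Mean over the classes that were evaluated.
    return sum(losses) / len(losses)
```

With `only_present=True`, classes missing from the ground truth contribute nothing instead of dragging the mean toward their false-positive penalty.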