elektronn3.modules.l1batchnorm module

class elektronn3.modules.l1batchnorm.L1BatchNorm(num_features, momentum=0.9)[source]

Bases: torch.nn.Module

L1-Norm-based Batch Normalization module.

Use with caution. This code is not extensively tested.

References:

  • https://arxiv.org/abs/1802.09769

  • https://arxiv.org/abs/1803.01814
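The underlying idea can be sketched in NumPy (a minimal illustration of L1-norm batch normalization as described in the references above, not the module's actual implementation; the sqrt(pi/2) factor rescales the mean absolute deviation so that it matches the L2 standard deviation in expectation for Gaussian inputs):

```python
import numpy as np

def l1_batch_norm(x, gamma, beta, eps=1e-5):
    """Sketch of L1-norm batch normalization over axis 0 (the batch axis).

    Instead of the variance, the scaled mean absolute deviation
    sqrt(pi/2) * mean(|x - mu|) is used as the normalizer; for
    Gaussian-distributed x it equals the standard deviation in expectation.
    """
    mu = x.mean(axis=0, keepdims=True)
    mad = np.abs(x - mu).mean(axis=0, keepdims=True)  # mean absolute deviation
    scale = np.sqrt(np.pi / 2) * mad + eps
    return gamma * (x - mu) / scale + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 4))  # (N, C) mini-batch
y = l1_batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(np.allclose(y.mean(axis=0), 0.0, atol=1e-7))  # prints True
```

Because the normalizer avoids squaring, it stays well-conditioned in half precision, which is the motivation given in the papers above.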

class elektronn3.modules.l1batchnorm.L1GroupNorm(*args: Any, **kwargs: Any)[source]

Bases: torch.nn.GroupNorm

Applies L1 Group Normalization over a mini-batch of inputs.

This works in the same way as torch.nn.GroupNorm, but uses the scaled L1 norm instead of the L2 norm for better numerical stability, performance and half precision support. L1 batch normalization was proposed in https://arxiv.org/abs/1802.09769; see also https://arxiv.org/abs/1803.01814.

This layer uses statistics computed from input data in both training and evaluation modes.

Parameters:

  • num_groups (int) – number of groups to separate the channels into

  • num_channels (int) – number of channels expected in input

  • eps (float) – a value added to the denominator for numerical stability. Default: 1e-5

  • affine (bool) – if True, this module has learnable per-channel affine parameters (weights initialized to ones, biases to zeros). Default: True

Shape:

  • Input: (N, C, *) where C = num_channels

  • Output: (N, C, *) (same shape as input)

elektronn3.modules.l1batchnorm.l1_group_norm(x, num_groups, weight, bias, eps)[source]
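A minimal NumPy sketch of the computation such a functional interface performs (illustrative only; the parameter names follow the signature above, but this is not elektronn3's implementation):

```python
import numpy as np

def l1_group_norm(x, num_groups, weight, bias, eps=1e-5):
    """Sketch of L1 group normalization for an (N, C, ...) input.

    Channels are split into num_groups groups; each group is normalized
    by its scaled mean absolute deviation instead of its L2 std, then a
    per-channel affine transform (weight, bias) is applied.
    """
    n, c = x.shape[:2]
    assert c % num_groups == 0
    g = x.reshape(n, num_groups, -1)                    # group view
    mu = g.mean(axis=2, keepdims=True)
    mad = np.abs(g - mu).mean(axis=2, keepdims=True)    # L1 statistic
    g = (g - mu) / (np.sqrt(np.pi / 2) * mad + eps)
    out = g.reshape(x.shape)
    shape = (1, c) + (1,) * (x.ndim - 2)                # broadcast per channel
    return out * weight.reshape(shape) + bias.reshape(shape)

x = np.random.default_rng(1).normal(size=(2, 6, 8, 8))
y = l1_group_norm(x, num_groups=3, weight=np.ones(6), bias=np.zeros(6))
```

With the identity affine parameters used here, each group of each sample comes out with zero mean and roughly unit scale.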