elektronn3.data.coord_transforms module

exception elektronn3.data.coord_transforms.WarpingOOBError(*args, **kwargs)[source]

Bases: ValueError

Raised when transformed coordinates refer to out-of-bounds areas.

This is expected to happen frequently when using random warping, but it is caught early, before any data is read. The dataset iterator is expected to handle this exception by retrying the same call, which re-randomizes the transformation.
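The retry pattern described above can be sketched as follows. The exception class and sampler here are local stand-ins (so the sketch is self-contained), not the elektronn3 implementation:

```python
import random

# Hypothetical stand-in for elektronn3's WarpingOOBError, defined locally
# so this sketch is self-contained.
class WarpingOOBError(ValueError):
    pass

def sample_warped_patch(rng):
    """Toy sampler: raises WarpingOOBError when the random warp lands
    out of bounds, otherwise returns a (fake) patch."""
    if rng.random() < 0.7:  # assumption: most random warps are OOB here
        raise WarpingOOBError("transformed coords out of bounds")
    return "patch"

def fetch_patch(rng, max_retries=1000):
    """Retry loop: each retry re-randomizes the transformation."""
    for _ in range(max_retries):
        try:
            return sample_warped_patch(rng)
        except WarpingOOBError:
            continue  # retry with a freshly randomized transformation
    raise RuntimeError("could not sample an in-bounds patch")

print(fetch_patch(random.Random(0)))
```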

elektronn3.data.coord_transforms.get_warped_coord_transform(inp_src_shape, patch_shape, aniso_factor=2, sample_aniso=True, warp_amount=1.0, lock_z=True, no_x_flip=False, perspective=False, target_src_shape=None, target_patch_shape=None)[source]

Generates the warping transformation parameters and composes them into a single homogeneous 4×4 transformation matrix M. Assumes 3-dimensional (volumetric) source data with shape (D, H, W) or (…, D, H, W). Any dimensions preceding the last three (e.g. a C dimension containing input channels) are ignored.

Parameters

  • inp_src_shape (Union[Tuple, ndarray]) – Input data source shape

  • patch_shape (Union[Tuple, ndarray]) – Patch shape (spatial shape of the neural network’s input node)

  • aniso_factor (int) – Anisotropy factor that determines an additional scaling in z direction.

  • sample_aniso (bool) – Scale z coordinates by 1 / aniso_factor while warping.

  • warp_amount (float) – Strength of the random warping transformation. A lower warp_amount will lead to less distorted images.

  • lock_z (bool) – Exclude z coordinates from the random warping transformations.

  • no_x_flip (bool) – Don’t flip x axis during random warping.

  • perspective (bool) – Apply perspective transformations (in addition to affine ones).

  • target_src_shape (Union[Tuple, ndarray, None]) – Target data source shape

  • target_patch_shape (Union[Tuple, ndarray, None]) – Target patch shape


Returns

Coordinate transformation matrix.

Return type

ndarray

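The shape of the composed matrix M can be illustrated with a minimal sketch, assuming a composition of anisotropic z-scaling, an in-plane rotation, and a translation. The real get_warped_coord_transform composes more components (random warping, flips, optional perspective); the function name and composition order below are assumptions for illustration only:

```python
import numpy as np

def compose_homogeneous(aniso_factor=2.0, angle=0.0, translation=(1.0, 2.0, 3.0)):
    """Hypothetical sketch: build a homogeneous 4x4 transform from an
    anisotropic z-scaling, an in-plane rotation and a translation."""
    S = np.diag([1.0 / aniso_factor, 1.0, 1.0, 1.0])  # scale z (axis order D, H, W)
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[1.0, 0.0, 0.0, 0.0],   # rotate in the H-W plane, z untouched
                  [0.0,   c,  -s, 0.0],
                  [0.0,   s,   c, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    T = np.eye(4)
    T[:3, 3] = translation
    return T @ R @ S  # applied right-to-left: scale, then rotate, then translate

M = compose_homogeneous()
point = np.array([2.0, 0.0, 0.0, 1.0])  # homogeneous voxel coordinate, z = 2
print(M @ point)
```

Because the matrix is homogeneous, the translation lives in the last column and points are mapped with a single matrix-vector product.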
elektronn3.data.coord_transforms.warp_slice(inp_src, patch_shape, M, target_src=None, target_patch_shape=None, target_discrete_ix=None, debug=False)[source]

Cuts a warped slice out of the input image and out of the target_src image. Warping is applied by multiplying the original source coordinates with the inverse of the homogeneous (forward) transformation matrix M.

“Source coordinates” (src_coords) signify the coordinates of voxels in inp_src and target_src that are used to compose their respective warped versions. The idea here is that not the images themselves, but the coordinates from where they are read are warped. This allows for much higher efficiency for large image volumes because we don’t have to calculate the expensive warping transform for the whole image, but only for the voxels that we eventually want to use for the new warped image. The transformed coordinates usually don’t align to the discrete voxel grids of the original images (meaning they are not integers), so the new voxel values are obtained by linear interpolation.
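The coordinate-warping idea described above can be sketched with plain NumPy and SciPy: instead of warping the whole volume, only the output patch's coordinates are transformed, and the source is read at those (generally non-integer) positions by linear interpolation. All names and the toy transform M below are assumptions for this sketch, not elektronn3 internals:

```python
import numpy as np
from scipy.ndimage import map_coordinates

src = np.arange(4 * 8 * 8, dtype=np.float64).reshape(4, 8, 8)  # (D, H, W)
patch_shape = (2, 4, 4)

# Homogeneous coordinates of every voxel of the output patch: shape (4, N).
grid = np.indices(patch_shape).reshape(3, -1).astype(np.float64)
grid_h = np.vstack([grid, np.ones((1, grid.shape[1]))])

# A forward homogeneous transform M; its inverse maps patch coordinates
# back into the source volume (here: a plain sub-voxel translation).
M = np.eye(4)
M[:3, 3] = (-1.0, -2.5, -1.5)
src_coords = (np.linalg.inv(M) @ grid_h)[:3]

# Read the source only at the N transformed coordinates (trilinear, order=1),
# never touching the rest of the volume.
warped = map_coordinates(src, src_coords, order=1).reshape(patch_shape)
print(warped[0, 0, 0])  # source value at fractional position (1.0, 2.5, 1.5)
```

The cost is proportional to the patch size, not the source volume size, which is the efficiency argument made above.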

Parameters

  • inp_src (DataSource) – Input image source (in HDF5)

  • patch_shape (Union[Tuple[int, …], ndarray]) – Spatial patch shape (D, H, W), i.e. the spatial shape of the neural network’s input node

  • M (ndarray) – Forward warping transformation matrix (4x4). Must contain translations for the source and target_src arrays.

  • target_src (Optional[DataSource]) – Optional target source array to be extracted from in the same way.

  • target_patch_shape (Union[Tuple[int], ndarray, None]) – Patch size for the target_src array.

  • target_discrete_ix (Optional[Sequence[int]]) – List of target channels that contain discrete values. By default (None), every channel is treated as discrete (this is generally the case for classification tasks). This information is used to decide which kind of interpolation is used for reading target data: discrete targets are obtained by nearest-neighbor interpolation, while non-discrete (continuous) targets are linearly interpolated.

  • debug (bool) – If True, enable additional sanity checks to catch warping issues early.
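The per-channel interpolation rule described for target_discrete_ix can be sketched as follows: discrete (label) channels are read with nearest-neighbor interpolation, continuous channels with linear interpolation. The channel data and index list are made up for this sketch:

```python
import numpy as np
from scipy.ndimage import map_coordinates

target = np.stack([
    np.array([[0.0, 0.0], [1.0, 1.0]]),  # channel 0: discrete class labels
    np.array([[0.0, 1.0], [2.0, 3.0]]),  # channel 1: continuous target
])
target_discrete_ix = [0]                 # channel 0 holds discrete values
coords = np.array([[0.4], [0.4]])        # one read position between voxel centers

# order=0 is nearest-neighbor, order=1 is linear interpolation.
out = [
    map_coordinates(target[c], coords, order=0 if c in target_discrete_ix else 1)
    for c in range(target.shape[0])
]
print(out[0][0], out[1][0])  # unblended label vs. linearly blended value
```

Nearest-neighbor reading keeps labels from being blended into meaningless in-between class values, while continuous targets benefit from the smoother linear read.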

Return type

Tuple[ndarray, Optional[ndarray]]


Returns

  • inp – Warped input image slice

  • target – Warped target_src image slice, or None if target_src is None.