MobileNetV2

class torch_ecg.models.MobileNetV2(in_channels: int, **config: CFG)[source]

Bases: Sequential, SizeMixin, CitationMixin

MobileNet V2.

MobileNet V2 is an upgraded version of MobileNet V1, originally proposed in [1]. It replaces the original residual blocks with inverted residual blocks. Torchvision's implementation [2] and Keras' implementation [3] are used as references.
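The channel bookkeeping of an inverted residual block can be sketched as follows. This is a minimal illustration of the expand → depthwise → project pattern described in the MobileNetV2 paper, not the torch_ecg source; the function name is hypothetical.

```python
def inverted_residual_plan(in_channels: int, expansion: int,
                           out_channels: int, stride: int):
    """Return per-stage (name, in, out) channel counts and whether a
    skip connection applies (stride 1 and matching channel counts)."""
    hidden = in_channels * expansion  # 1x1 "expand" pointwise conv
    stages = [
        ("expand_1x1", in_channels, hidden),
        ("depthwise", hidden, hidden),          # depthwise conv, given stride
        ("project_1x1", hidden, out_channels),  # linear bottleneck
    ]
    has_residual = (stride == 1 and in_channels == out_channels)
    return stages, has_residual

# Expansion 6 widens a 16-channel input to a 96-channel hidden
# representation; the residual connection is kept because stride is 1
# and the input/output channel counts match.
stages, skip = inverted_residual_plan(16, 6, 16, stride=1)
```

Note the "inverted" shape: the block is wide in the middle and narrow at both ends, the opposite of a classic bottleneck residual block.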

Parameters:
  • in_channels (int) – Number of channels in the input signal tensor.

  • config (dict) –

    Other hyperparameters of the module, referring to the corresponding config file. Keyword arguments that have to be set are as follows:

    • groups: int, number of groups in the pointwise convolutional layer(s).

    • norm: bool or str or Module, normalization layer.

    • activation: str or Module, activation layer.

    • bias: bool, whether to use bias in the convolutional layer(s).

    • width_multiplier: float, multiplier of the number of output channels of the pointwise convolution.

    • stem: CFG, config of the stem block, with the following keys:

      • num_filters: int or Sequence[int], number of filters in the first convolutional layer(s).

      • filter_lengths: int or Sequence[int], filter lengths (kernel sizes) in the first convolutional layer(s).

      • subsample_lengths: int or Sequence[int], subsample lengths (strides) in the first convolutional layer(s).

    • inv_res: CFG, config of the inverted residual blocks, with the following keys:

      • expansions: Sequence[int], expansion ratios of the inverted residual blocks.

      • out_channels: Sequence[int], number of output channels in each block.

      • n_blocks: Sequence[int], number of inverted residual blocks.

      • strides: Sequence[int], strides of the inverted residual blocks.

      • filter_lengths: Sequence[int], filter lengths (kernel sizes) in each block.

    • exit_flow: CFG, config of the exit flow blocks, with the following keys:

      • num_filters: int or Sequence[int], number of filters in the final convolutional layer(s).

      • filter_lengths: int or Sequence[int], filter lengths (kernel sizes) in the final convolutional layer(s).

      • subsample_lengths: int or Sequence[int], subsample lengths (strides) in the final convolutional layer(s).
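A config dict matching the keys documented above can be sketched as follows. The values are illustrative placeholders (loosely modeled on the original MobileNetV2 stage layout), not the library's defaults.

```python
# Hedged sketch of a MobileNetV2 config; values are assumptions,
# not torch_ecg's shipped defaults.
config = {
    "groups": 1,
    "norm": "batch_norm",
    "activation": "relu6",
    "bias": False,
    "width_multiplier": 1.0,
    "stem": {
        "num_filters": 32,
        "filter_lengths": 3,
        "subsample_lengths": 2,
    },
    "inv_res": {
        "expansions": [1, 6, 6, 6, 6, 6, 6],
        "out_channels": [16, 24, 32, 64, 96, 160, 320],
        "n_blocks": [1, 2, 3, 4, 3, 3, 1],
        "strides": [1, 2, 2, 2, 1, 2, 1],
        "filter_lengths": [3, 3, 3, 3, 3, 3, 3],
    },
    "exit_flow": {
        "num_filters": 1280,
        "filter_lengths": 1,
        "subsample_lengths": 1,
    },
}
# model = MobileNetV2(in_channels=12, **config)  # requires torch_ecg
```

Note that the `inv_res` sequences are parallel: each index describes one stage, so `expansions`, `out_channels`, `n_blocks`, `strides`, and `filter_lengths` must all have the same length.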

References

[1] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).

[2] https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv2.py

[3] https://github.com/keras-team/keras-applications/blob/master/keras_applications/mobilenet_v2.py

compute_output_shape(seq_len: int | None = None, batch_size: int | None = None) Sequence[int | None][source]

Compute the output shape of the model.

Parameters:
  • seq_len (int, optional) – Length of the input tensors.

  • batch_size (int, optional) – Batch size of the input tensors.

Returns:

output_shape – The output shape of the module.

Return type:

Sequence[int | None]
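The sequence-length component of the output shape shrinks by the cumulative effect of all subsampling strides. A minimal sketch of that arithmetic, assuming "same"-style padding so each strided layer ceil-divides the length (an assumption about the conv configuration, not verified against torch_ecg; the helper name is hypothetical):

```python
def expected_seq_len(seq_len: int, strides: list[int]) -> int:
    """Ceil-divide the input length by each stride in turn,
    mimicking a stack of 'same'-padded strided convolutions."""
    out = seq_len
    for s in strides:
        out = (out + s - 1) // s  # ceil division
    return out

# A stem stride of 2 followed by one stride per inverted-residual
# stage (illustrative values):
expected_seq_len(5000, [2, 1, 2, 2, 2, 1, 2, 1])  # → 157
```

Passing `seq_len=None` (or `batch_size=None`) to `compute_output_shape` leaves the corresponding dimension as `None` in the returned shape, which is why the return type allows `int | None` entries.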