DenseNet
- class torch_ecg.models.DenseNet(in_channels: int, **config)
Bases: Sequential, SizeMixin, CitationMixin
The core part of the SOTA model (framework) of CPSC2020.
DenseNet was originally proposed in [1]; [2] is its journal version. The original implementation is available at [3] and [4], [5] is an unofficial PyTorch implementation, and torchvision also provides an implementation of DenseNet [6].
DenseNet has been successful not only in image classification but also in various ECG-related tasks, which is why it serves as the core of the SOTA model (framework) of CPSC2020.
- Parameters:
in_channels (int) – Number of features (channels) of the input.
config (dict) –
Other hyper-parameters of the Module; refer to the corresponding config file. The following keyword arguments must be set:
num_layers: sequence of int, number of building block layers of each dense (macro) block
init_num_filters: int, number of filters of the first convolutional layer
init_filter_length: int, filter length (kernel size) of the first convolutional layer
init_conv_stride: int, stride of the first convolutional layer
init_pool_size: int, pooling kernel size of the first pooling layer
init_pool_stride: int, pooling stride of the first pooling layer
growth_rates: int, or sequence of int, or sequence of sequences of int; growth rates of the building blocks, specified at the granularity of the whole network, of each dense (macro) block, or of each building block
filter_lengths: int, or sequence of int, or sequence of sequences of int; filter lengths (kernel sizes) of the convolutions, specified at the granularity of the whole network, of each macro block, or of each building block
subsample_lengths: int or sequence of int, subsampling length(s) (ratio(s)) of the transition blocks
compression: float, compression factor of the transition blocks
bn_size: int, bottleneck base width; used only when the building block is DenseBottleNeck
dropouts: float or dict, dropout ratio of each building block
groups: int, connection pattern (of channels) of the inputs and outputs
block: dict, other parameters that can be set for the building blocks
For a full list of configurable parameters, refer to the corresponding config file. A minimal construction sketch is given below.
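A minimal sketch of constructing and running the model, assuming illustrative hyper-parameter values only (the tested values live in the corresponding config file, e.g. under torch_ecg.model_configs):

```python
import torch

from torch_ecg.models import DenseNet

# Hypothetical hyper-parameters for illustration; prefer the values
# shipped in the corresponding config file.
config = dict(
    num_layers=[6, 6, 6, 6],  # building-block layers per dense (macro) block
    init_num_filters=64,      # filters of the first convolutional layer
    init_filter_length=25,    # kernel size of the first convolutional layer
    init_conv_stride=2,       # stride of the first convolutional layer
    init_pool_size=3,         # kernel size of the first pooling layer
    init_pool_stride=2,       # stride of the first pooling layer
    growth_rates=16,          # a single int: shared by the whole network
    filter_lengths=15,        # a single int: shared by the whole network
    subsample_lengths=2,      # subsampling ratio of the transition blocks
    compression=0.5,          # channel compression of the transition blocks
    dropouts=0.2,             # dropout ratio of each building block
    groups=1,                 # channel connection pattern
    block=dict(),             # extra parameters for the building blocks
)

model = DenseNet(in_channels=12, **config)  # e.g. 12-lead ECG input

x = torch.randn(2, 12, 4000)  # (batch, leads, samples)
y = model(x)                  # (batch, out_channels, reduced length)
```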
TODO
For groups > 1, should the concatenated output be re-organized in the channel dimension?
Memory-efficient mode, i.e., storing new_features in a shared memory instead of stacking them into a newly created Tensor after each mini-block; see the checkpointing sketch below.
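One common way to realize such a memory-efficient mode (a sketch of the general technique, not necessarily what this module will adopt) is gradient checkpointing: the concatenation and convolution are recomputed during the backward pass instead of keeping the concatenated tensor alive.

```python
import torch
import torch.utils.checkpoint as cp

def naive_mini_block(conv, prev_features):
    # Stacks all previous feature maps into a newly created Tensor,
    # which stays alive until the backward pass.
    return conv(torch.cat(prev_features, dim=1))

def checkpointed_mini_block(conv, prev_features):
    # Recomputes concat + conv during backward instead of storing the
    # concatenated tensor, trading compute for memory.
    def closure(*features):
        return conv(torch.cat(features, dim=1))
    return cp.checkpoint(closure, *prev_features, use_reentrant=False)

# usage sketch with a 1D convolution, as in ECG models
conv = torch.nn.Conv1d(in_channels=48, out_channels=16, kernel_size=15, padding=7)
feats = [torch.randn(2, 16, 1000, requires_grad=True) for _ in range(3)]
out = checkpointed_mini_block(conv, feats)
out.sum().backward()
```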
References