torch_ecg.utils.compute_module_size

torch_ecg.utils.compute_module_size(module: Module, requires_grad: bool = True, include_buffers: bool = False, human: bool = False) → int | str

Compute the size (number of parameters) of a Module.

Parameters:
  • module (torch.nn.Module) – The Module whose size is to be computed.

  • requires_grad (bool, default True) – Whether to count only the parameters that require gradients.

  • include_buffers (bool, default False) – Whether to include the buffers. If requires_grad is True, include_buffers is ignored, since buffers never require gradients.

  • human (bool, default False) – Whether to return the size in human-readable form, i.e. the memory size in bytes with a suffix indicating the unit (B, K, M, G, T, P).

Returns:

n_params – Size (number of parameters) of the Module, or, if human is True, a string representing its memory size.

Return type:

int or str

Examples

>>> import torch
>>> class Model(torch.nn.Sequential):
...     def __init__(self):
...         super().__init__()
...         self.add_module("linear", torch.nn.Linear(10, 20, dtype=torch.float16))
...         self.register_buffer("hehe", torch.ones(20, 2, dtype=torch.float64))
>>> model = Model()
>>> _ = model.linear.weight.requires_grad_(False)
>>> compute_module_size(model)
20
>>> compute_module_size(model, requires_grad=False)
220
>>> compute_module_size(model, requires_grad=False, include_buffers=True)
260
>>> compute_module_size(model, requires_grad=False, include_buffers=True, human=True)
'0.7K'
>>> compute_module_size(model, requires_grad=False, include_buffers=False, human=True)
'0.4K'
>>> compute_module_size(model, human=True)
'40.0B'
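The behavior exercised above can be sketched roughly as follows. This is a hypothetical re-implementation for illustration, not the library's actual code; in particular, the human-readable formatting rule (divide by 1024 while the result stays at or above 0.1, then round to one decimal) is inferred from the doctest outputs above.

```python
import torch

def count_module_size(module: torch.nn.Module, requires_grad: bool = True,
                      include_buffers: bool = False, human: bool = False):
    """Hypothetical sketch of the documented counting/formatting behavior."""
    if requires_grad:
        # Only trainable parameters; buffers never require gradients,
        # so include_buffers is irrelevant in this branch.
        tensors = [p for p in module.parameters() if p.requires_grad]
    else:
        tensors = list(module.parameters())
        if include_buffers:
            tensors += list(module.buffers())
    if not human:
        # Plain parameter count: total number of elements.
        return sum(t.numel() for t in tensors)
    # Human-readable: total memory in bytes, scaled to a unit suffix.
    n_bytes = float(sum(t.numel() * t.element_size() for t in tensors))
    units = ["B", "K", "M", "G", "T", "P"]
    i = 0
    while i + 1 < len(units) and n_bytes / 1024 >= 0.1:
        n_bytes /= 1024
        i += 1
    return f"{round(n_bytes, 1)}{units[i]}"

lin = torch.nn.Linear(10, 20, dtype=torch.float16)
_ = lin.weight.requires_grad_(False)
print(count_module_size(lin))                       # bias only: 20
print(count_module_size(lin, requires_grad=False))  # weight + bias: 220
print(count_module_size(lin, human=True))           # 20 float16 values: '40.0B'
```

Note that a float16 parameter occupies 2 bytes per element and a float64 buffer 8 bytes per element, which is how the doctest arrives at 440 bytes ('0.4K') for the parameters alone and 760 bytes ('0.7K') with the buffer included.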