fl_sim.utils#
This module contains various utilities for the fl-sim
package.
fl_sim.utils.loggers#
This module contains various loggers.
- class fl_sim.utils.loggers.BaseLogger[source]#
-
Abstract base class of all loggers.
- epoch_end(epoch: int) None [source]#
Actions to be performed at the end of each epoch.
- Parameters:
epoch (int) – The number of the current epoch.
- Return type:
None
- epoch_start(epoch: int) None [source]#
Actions to be performed at the start of each epoch.
- Parameters:
epoch (int) – The number of the current epoch.
- Return type:
None
- abstract classmethod from_config(config: Dict[str, Any]) Any [source]#
Create a logger instance from a configuration.
- abstract log_metrics(client_id: int | None, metrics: Dict[str, Real | Tensor], step: int | None = None, epoch: int | None = None, part: str = 'val') None [source]#
Log metrics.
- Parameters:
client_id (int) – Index of the client; None for the server.
metrics (dict) – The metrics to be logged.
step (int, optional) – The current number of (global) training steps.
epoch (int, optional) – The current training epoch number.
part (str, default "val") – The part of the data the metrics were computed from; can be "train", "val", "test", etc.
- Return type:
None
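To make the contract concrete, here is a minimal sketch of a concrete logger. The stand-in base class below only mirrors the interface documented above (it does not import fl_sim), and `InMemoryLogger` is a hypothetical subclass invented for illustration:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class _BaseLoggerSketch(ABC):
    """Stand-in mirroring the documented BaseLogger interface."""

    def epoch_start(self, epoch: int) -> None: ...
    def epoch_end(self, epoch: int) -> None: ...

    @abstractmethod
    def log_metrics(self, client_id, metrics, step=None, epoch=None, part="val") -> None: ...

    @classmethod
    @abstractmethod
    def from_config(cls, config: Dict[str, Any]) -> Any: ...

class InMemoryLogger(_BaseLoggerSketch):
    """Hypothetical subclass that keeps logged records in a list."""

    def __init__(self):
        self.records = []

    def log_metrics(self, client_id, metrics, step=None, epoch=None, part="val"):
        # One record per call, merging the metrics dict with the metadata.
        self.records.append(
            {"client_id": client_id, "part": part, "epoch": epoch, "step": step, **metrics}
        )

    @classmethod
    def from_config(cls, config):
        return cls()

logger = InMemoryLogger.from_config({})
logger.epoch_start(1)
logger.log_metrics(client_id=None, metrics={"acc": 0.9}, epoch=1, part="val")
logger.epoch_end(1)
```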
- class fl_sim.utils.loggers.CSVLogger(algorithm: str, dataset: str, model: str, log_dir: Path | str | None = None, log_suffix: str | None = None, verbose: int = 1)[source]#
Bases: BaseLogger
Logger that logs to a CSV file.
- Parameters:
algorithm (str) – Used to form the prefix of the log file.
dataset (str) – Used to form the prefix of the log file.
model (str) – Used to form the prefix of the log file.
log_dir (str or pathlib.Path, optional) – Directory to save the log file.
log_suffix (str, optional) – Suffix of the log file.
verbose (int, default 1) – The verbosity level. Not used in this logger, but is kept for compatibility with other loggers.
- classmethod from_config(config: Dict[str, Any]) CSVLogger [source]#
Create a CSVLogger instance from a configuration.
- log_metrics(client_id: int | None, metrics: Dict[str, Real | Tensor], step: int | None = None, epoch: int | None = None, part: str = 'val') None [source]#
Log metrics.
- Parameters:
client_id (int) – Index of the client; None for the server.
metrics (dict) – The metrics to be logged.
step (int, optional) – The current number of (global) training steps.
epoch (int, optional) – The current training epoch number.
part (str, default "val") – The part of the data the metrics were computed from; can be "train", "val", "test", etc.
- Return type:
None
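Conceptually, each `log_metrics` call of a CSV logger becomes one row in the log file. A minimal sketch of that behavior using the standard library (the file name and column names here are illustrative, not CSVLogger's actual schema):

```python
import csv
import tempfile
from pathlib import Path

# Hypothetical log file; CSVLogger derives its prefix from
# algorithm, dataset, and model.
log_file = Path(tempfile.mkdtemp()) / "fedavg-cifar10-cnn.csv"
fieldnames = ["client_id", "epoch", "step", "part", "acc", "loss"]

with log_file.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    # One row per log_metrics call; empty client_id stands for the server.
    writer.writerow(
        {"client_id": "", "epoch": 1, "step": 100, "part": "val", "acc": 0.81, "loss": 0.52}
    )

with log_file.open() as f:
    rows = list(csv.DictReader(f))
```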
- class fl_sim.utils.loggers.JsonLogger(algorithm: str, dataset: str, model: str, fmt: str = 'json', log_dir: Path | str | None = None, log_suffix: str | None = None, verbose: int = 1)[source]#
Bases: BaseLogger
Logger that logs to a JSON file or a YAML file.
- Parameters:
algorithm (str) – Used to form the prefix of the log file.
dataset (str) – Used to form the prefix of the log file.
model (str) – Used to form the prefix of the log file.
fmt ({"json", "yaml"}, optional) – Format of the log file.
log_dir (str or pathlib.Path, optional) – Directory to save the log file.
log_suffix (str, optional) – Suffix of the log file.
verbose (int, default 1) – The verbosity level. Not used in this logger, but is kept for compatibility with other loggers.
- classmethod from_config(config: Dict[str, Any]) JsonLogger [source]#
Create a JsonLogger instance from a configuration.
- Parameters:
config (dict) – Configuration for the logger. The following keys are used:
"algorithm": str, name of the algorithm.
"dataset": str, name of the dataset.
"model": str, name of the model.
"fmt": {"json", "yaml"}, optional, format of the log file, default "json".
"log_dir": str or pathlib.Path, optional, directory to save the log file.
"log_suffix": str, optional, suffix of the log file.
- Returns:
A JsonLogger instance.
- Return type:
JsonLogger
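For example, a configuration dict with the keys documented above might look like this (the values are made up for illustration, and constructing the logger itself requires fl_sim to be installed):

```python
# Hypothetical configuration for JsonLogger.from_config.
config = {
    "algorithm": "FedAvg",
    "dataset": "FedCIFAR100",
    "model": "cnn",
    "fmt": "yaml",         # optional, defaults to "json"
    "log_dir": "./logs",   # optional
    "log_suffix": "run1",  # optional
}
# logger = JsonLogger.from_config(config)  # requires fl_sim
```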
- log_metrics(client_id: int | None, metrics: Dict[str, Real | Tensor], step: int | None = None, epoch: int | None = None, part: str = 'val') None [source]#
Log metrics.
- Parameters:
client_id (int) – Index of the client; None for the server.
metrics (dict) – The metrics to be logged.
step (int, optional) – The current number of (global) training steps.
epoch (int, optional) – The current training epoch number.
part (str, default "val") – The part of the data the metrics were computed from; can be "train", "val", "test", etc.
- Return type:
None
- class fl_sim.utils.loggers.LoggerManager(algorithm: str, dataset: str, model: str, log_dir: Path | str | None = None, log_suffix: str | None = None, verbose: int = 1)[source]#
Bases: ReprMixin
Manager for loggers.
- Parameters:
algorithm (str) – Used to form the prefix of the log file.
dataset (str) – Used to form the prefix of the log file.
model (str) – Used to form the prefix of the log file.
log_dir (str or pathlib.Path, optional) – Directory to save the log file.
log_suffix (str, optional) – Suffix of the log file.
verbose (int, default 1) – The verbosity level.
- epoch_end(epoch: int) None [source]#
Actions to be performed at the end of each epoch.
- Parameters:
epoch (int) – The number of the current epoch.
- Return type:
None
- epoch_start(epoch: int) None [source]#
Actions to be performed at the start of each epoch.
- Parameters:
epoch (int) – The number of the current epoch.
- Return type:
None
- classmethod from_config(config: Dict[str, Any]) LoggerManager [source]#
Create a LoggerManager instance from a configuration.
- Parameters:
config (dict) – Configuration of the logger manager. The following keys are used:
"algorithm": str, algorithm name.
"dataset": str, dataset name.
"model": str, model name.
"log_dir": str or pathlib.Path, optional, directory to save the log files.
"log_suffix": str, optional, suffix of the log files.
"txt_logger": bool, optional, whether to add a TxtLogger instance.
"csv_logger": bool, optional, whether to add a CSVLogger instance.
"json_logger": bool, optional, whether to add a JsonLogger instance.
"fmt": {"json", "yaml"}, optional, format of the JSON log file, default "json"; only consulted when "json_logger" is True.
"verbose": int, optional, verbosity level of the logger manager.
- Returns:
A LoggerManager instance.
- Return type:
LoggerManager
- log_metrics(client_id: int | None, metrics: Dict[str, Real | Tensor], step: int | None = None, epoch: int | None = None, part: str = 'val') None [source]#
Log metrics via all managed loggers.
- property loggers: List[BaseLogger]#
The list of loggers.
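A typical manager configuration enabling all three logger types might look like the sketch below (values are illustrative; constructing and driving the manager requires fl_sim):

```python
# Hypothetical LoggerManager configuration following the documented
# from_config schema.
config = {
    "algorithm": "FedProx",
    "dataset": "FedEMNIST",
    "model": "mlp",
    "log_dir": "./logs",
    "txt_logger": True,
    "csv_logger": True,
    "json_logger": True,
    "fmt": "json",   # only consulted because "json_logger" is True
    "verbose": 1,
}
# manager = LoggerManager.from_config(config)   # requires fl_sim
# manager.epoch_start(1)
# manager.log_metrics(None, {"acc": 0.9}, epoch=1, part="val")
# manager.epoch_end(1)
```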
- class fl_sim.utils.loggers.TxtLogger(algorithm: str, dataset: str, model: str, log_dir: Path | str | None = None, log_suffix: str | None = None, verbose: int = 1)[source]#
Bases: BaseLogger
Logger that logs to a text file.
- Parameters:
algorithm (str) – Used to form the prefix of the log file.
dataset (str) – Used to form the prefix of the log file.
model (str) – Used to form the prefix of the log file.
log_dir (str or pathlib.Path, optional) – Directory to save the log file. If None, use the default log directory. If not absolute, use DEFAULT_LOG_DIR/log_dir.
log_suffix (str, optional) – Suffix of the log file.
verbose (int, default 1) – The verbosity level.
- epoch_end(epoch: int) None [source]#
Actions to be performed at the end of each epoch.
- Parameters:
epoch (int) – The number of the current epoch.
- Return type:
None
- epoch_start(epoch: int) None [source]#
Actions to be performed at the start of each epoch.
- Parameters:
epoch (int) – The number of the current epoch.
- Return type:
None
- classmethod from_config(config: Dict[str, Any]) TxtLogger [source]#
Create a TxtLogger instance from a configuration.
- log_metrics(client_id: int | None, metrics: Dict[str, Real | Tensor], step: int | None = None, epoch: int | None = None, part: str = 'val') None [source]#
Log metrics.
- Parameters:
client_id (int) – Index of the client; None for the server.
metrics (dict) – The metrics to be logged.
step (int, optional) – The current number of (global) training steps.
epoch (int, optional) – The current training epoch number.
part (str, default "val") – The part of the data the metrics were computed from; can be "train", "val", "test", etc.
- Return type:
None
fl_sim.utils.imports#
This module contains utilities for dynamic imports.
- fl_sim.utils.imports.load_module_from_file(file_path: str | Path) module [source]#
Load a module from a file.
- Parameters:
file_path (str or pathlib.Path) – The path of the file.
- Returns:
The loaded module.
- Return type:
module
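Helpers like this typically wrap the standard importlib recipe. A self-contained sketch (not necessarily the exact implementation) demonstrated on a throwaway module written to a temp file:

```python
import importlib.util
import tempfile
from pathlib import Path
from types import ModuleType

def load_module_from_file_sketch(file_path) -> ModuleType:
    """Load a Python module from a file path via importlib."""
    file_path = Path(file_path)
    spec = importlib.util.spec_from_file_location(file_path.stem, str(file_path))
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # actually runs the module's code
    return module

# Demonstrate on a throwaway module.
tmp = Path(tempfile.mkdtemp()) / "my_mod.py"
tmp.write_text("ANSWER = 42\n")
mod = load_module_from_file_sketch(tmp)
```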
fl_sim.utils.misc#
This module provides miscellaneous utilities.
- fl_sim.utils.misc.add_kwargs(func: callable, **kwargs: Any) callable [source]#
Add keyword arguments to a function.
This function adds keyword arguments to a function in order to make it compatible with other functions.
- Parameters:
func (callable) – The function to be decorated.
kwargs (dict) – The keyword arguments to be added.
- Returns:
The decorated function, with the keyword arguments added.
- Return type:
callable
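One plausible reading of this helper, sketched with a `functools.wraps` wrapper; the real function's exact semantics may differ (here, the added kwargs act as defaults that callers may still override):

```python
import functools

def add_kwargs_sketch(func, **extra):
    """Sketch: graft default keyword arguments onto func so it can be
    called through interfaces that expect them."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Caller-supplied kwargs take precedence over the grafted defaults.
        merged = {**extra, **kwargs}
        return func(*args, **merged)
    return wrapper

def scale(x, factor=1):
    return x * factor

scale3 = add_kwargs_sketch(scale, factor=3)
```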
- fl_sim.utils.misc.clear_logs(pattern: str = '*', directory: Path | str | None = None) None [source]#
Clear matching log files in the given directory.
- Parameters:
pattern (str, optional) – Pattern of log files to be cleared, by default "*". The search is performed with pathlib.Path.rglob().
directory (str or pathlib.Path, optional) – Directory to be searched. By default the default log directory LOG_DIR is used. If the given directory is not absolute, it is joined with LOG_DIR.
- fl_sim.utils.misc.default_dict_to_dict(d: defaultdict | dict | list | tuple) dict | list [source]#
Convert default dict to dict.
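A sketch of the recursive conversion this function performs (written independently of fl_sim; the real implementation may differ in details such as tuple handling):

```python
from collections import defaultdict

def default_dict_to_dict_sketch(d):
    """Recursively convert defaultdicts (and containers holding them)
    into plain dicts and lists."""
    if isinstance(d, dict):  # covers defaultdict, since it subclasses dict
        return {k: default_dict_to_dict_sketch(v) for k, v in d.items()}
    if isinstance(d, (list, tuple)):
        return [default_dict_to_dict_sketch(v) for v in d]
    return d

dd = defaultdict(lambda: defaultdict(int))
dd["a"]["x"] = 1
plain = default_dict_to_dict_sketch(dd)
```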
- fl_sim.utils.misc.find_longest_common_substring(strings: Sequence[str], min_len: int | None = None, ignore: str | None = None) str [source]#
Find the longest common substring of a list of strings.
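A brute-force sketch of the underlying algorithm, checking substrings of the shortest string from longest to shortest (the real helper additionally supports the min_len and ignore parameters):

```python
def longest_common_substring_sketch(strings):
    """Return the longest substring common to all strings
    (empty string if none)."""
    if not strings:
        return ""
    shortest = min(strings, key=len)
    n = len(shortest)
    # Try longer candidates first so the first hit is the answer.
    for length in range(n, 0, -1):
        for start in range(n - length + 1):
            cand = shortest[start:start + length]
            if all(cand in s for s in strings):
                return cand
    return ""

common = longest_common_substring_sketch(
    ["fedavg-cifar10", "fedprox-cifar10", "scaffold-cifar10"]
)
```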
- fl_sim.utils.misc.get_scheduler(scheduler_name: str, optimizer: Optimizer, config: dict | None) _LRScheduler [source]#
Get learning rate scheduler.
- Parameters:
scheduler_name (str) – Name of the scheduler.
optimizer (torch.optim.Optimizer) – Optimizer.
config (dict) – Configuration of the scheduler.
- Returns:
Learning rate scheduler.
- Return type:
torch.optim.lr_scheduler._LRScheduler
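Factories like this usually boil down to a name-to-class dispatch whose config dict is unpacked into the constructor. A torch-free sketch of that pattern (the classes below are trivial stand-ins, not real torch schedulers):

```python
# Stand-in scheduler classes for illustration only.
class _ConstantLR:
    def __init__(self, lr=0.1):
        self.lr = lr
    def get_lr(self, step):
        return self.lr

class _StepLR:
    def __init__(self, lr=0.1, step_size=10, gamma=0.5):
        self.lr, self.step_size, self.gamma = lr, step_size, gamma
    def get_lr(self, step):
        # Decay by gamma every step_size steps.
        return self.lr * self.gamma ** (step // self.step_size)

_SCHEDULERS = {"constant": _ConstantLR, "step": _StepLR}

def get_scheduler_sketch(name, config=None):
    """Dispatch on name and unpack config into the constructor."""
    config = config or {}
    try:
        return _SCHEDULERS[name.lower()](**config)
    except KeyError:
        raise ValueError(f"unknown scheduler: {name!r}") from None

sched = get_scheduler_sketch("step", {"lr": 0.1, "step_size": 10, "gamma": 0.5})
```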
- fl_sim.utils.misc.get_scheduler_info(scheduler_name: str) dict [source]#
Get information of the scheduler, including the required and optional configs.
- fl_sim.utils.misc.is_notebook() bool [source]#
Check if the current environment is a notebook (Jupyter or Colab).
Implementation adapted from [1].
- Returns:
Whether the code is running in a notebook.
- Return type:
bool
References
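The common recipe such checks adapt is to inspect the IPython shell class name; a sketch (the exact class names consulted by fl_sim may differ):

```python
def is_notebook_sketch() -> bool:
    """Return True when running inside a Jupyter/Colab notebook kernel."""
    try:
        # get_ipython is injected into builtins only under IPython;
        # in a plain interpreter this raises NameError.
        shell = get_ipython().__class__
    except NameError:
        return False
    # ZMQInteractiveShell -> Jupyter kernel; Colab exposes its own shell class.
    return shell.__name__ == "ZMQInteractiveShell" or "colab" in str(shell).lower()
```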
- fl_sim.utils.misc.make_serializable(x: ndarray | generic | dict | list | tuple) list | dict | Number [source]#
Make an object serializable.
This function converts all numpy arrays in an object to lists, and numpy data types to Python data types, so that the object can be serialized by json.
- Parameters:
x (Union[numpy.ndarray, numpy.generic, dict, list, tuple]) – Input data: a numpy array (or numpy scalar), or a dict, list, or tuple containing numpy arrays (or numpy scalars).
- Returns:
Converted data.
- Return type:
Union[list, dict, numbers.Number]
Examples
>>> import numpy as np
>>> from fl_sim.utils.misc import make_serializable
>>> x = np.array([1, 2, 3])
>>> make_serializable(x)
[1, 2, 3]
>>> x = {"a": np.array([1, 2, 3]), "b": np.array([4, 5, 6])}
>>> make_serializable(x)
{'a': [1, 2, 3], 'b': [4, 5, 6]}
>>> x = [np.array([1, 2, 3]), np.array([4, 5, 6])]
>>> make_serializable(x)
[[1, 2, 3], [4, 5, 6]]
>>> x = (np.array([1, 2, 3]), np.array([4, 5, 6]).mean())
>>> obj = make_serializable(x)
>>> obj
[[1, 2, 3], 5.0]
>>> type(obj[1]), type(x[1])
(float, numpy.float64)
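The recursion behind this conversion can be sketched without numpy by duck-typing on the `.tolist()` method that numpy arrays and scalars expose (the real implementation presumably uses explicit numpy type checks):

```python
def make_serializable_sketch(x):
    """Recursively convert array-likes to lists so the result is
    JSON-serializable. Duck-typed sketch: anything with .tolist()
    is treated as a numpy array or scalar."""
    if isinstance(x, dict):
        return {k: make_serializable_sketch(v) for k, v in x.items()}
    if isinstance(x, (list, tuple)):
        return [make_serializable_sketch(v) for v in x]
    if hasattr(x, "tolist"):  # numpy arrays and numpy scalars
        return x.tolist()
    return x

# Tiny stand-in for a numpy array, so the sketch runs without numpy.
class FakeArray:
    def __init__(self, data):
        self.data = data
    def tolist(self):
        return list(self.data)

obj = make_serializable_sketch({"a": FakeArray([1, 2, 3]), "b": (FakeArray([4]), 5)})
```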