CINC2020

class torch_ecg.databases.CINC2020(db_dir: str | bytes | PathLike | None = None, working_dir: str | bytes | PathLike | None = None, verbose: int = 1, **kwargs: Any)[source]

Bases: PhysioNetDataBase

Classification of 12-lead ECGs: the PhysioNet/Computing in Cardiology Challenge 2020

ABOUT

  1. There are 6 different tranches of training data, listed as follows:

    1. 6,877 recordings from China Physiological Signal Challenge in 2018 (CPSC2018, see also [3]): PhysioNetChallenge2020_Training_CPSC.tar.gz

    2. 3,453 recordings from China 12-Lead ECG Challenge Database (unused data from CPSC2018 and NOT the CPSC2018 test data): PhysioNetChallenge2020_Training_2.tar.gz

    3. 74 recordings from the St Petersburg INCART 12-lead Arrhythmia Database: PhysioNetChallenge2020_Training_StPetersburg.tar.gz

    4. 516 recordings from the PTB Diagnostic ECG Database: PhysioNetChallenge2020_Training_PTB.tar.gz

    5. 21,837 recordings from the PTB-XL electrocardiography Database: PhysioNetChallenge2020_PTB-XL.tar.gz

    6. 10,344 recordings from a Georgia 12-Lead ECG Challenge Database: PhysioNetChallenge2020_Training_E.tar.gz

    In total, 43,101 labeled recordings of 12-lead ECGs from four countries (China, Germany, Russia, and the USA) across 3 continents have been posted publicly for this Challenge, with approximately the same number hidden for testing, representing the largest public collection of 12-lead ECGs. All files can be downloaded from [7] or [8].

  2. the A tranche training data come from CPSC2018, whose folder name is Training_WFDB. The B tranche training data are the unused training data of CPSC2018, with folder name Training_2. For these 2 tranches, ref. the docstring of database_reader.cpsc_databases.cpsc2018.CPSC2018

  3. the C, D, E tranches of training data all come from the corresponding PhysioNet datasets, whose details can be found in the corresponding files:

    • C: INCARTDB, ref [4]

    • D: PTBDB, ref [5]

    • E: PTB_XL, ref [6]

    the C tranche has folder name Training_StPetersburg, the D tranche has folder name Training_PTB, and the E tranche has folder name WFDB

  4. the F tranche is entirely new, posted for this Challenge, and represents a unique demographic of the Southeastern United States. It has folder name Training_E/WFDB.

  5. only a part of the diagnosis_abbr (diseases that appear in the labels of the 6 tranches of training data) are used in the scoring function (ref. dx_mapping_scored_cinc2020), while others are ignored (ref. dx_mapping_unscored_cinc2020). The scored diagnoses were chosen based on the prevalence of the diagnoses in the training data, the severity of the diagnoses, and the ability to determine the diagnoses from ECG recordings. The ignored diagnosis_abbr can be put in a “non-class” group.

  6. the (updated) scoring function has a scoring matrix with nonzero off-diagonal elements. This scoring function reflects the clinical reality that some misdiagnoses are more harmful than others and should be scored accordingly. Moreover, it reflects the fact that confusing some classes is much less harmful than confusing other classes.
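
As an illustration of such a matrix, a generalized score can be sketched as follows. This is a minimal toy example with a made-up 3-class weight matrix and a hypothetical generalized_score helper, not the official Challenge matrix or evaluation code (which additionally normalizes the confusion counts):

```python
import numpy as np

# hypothetical 3-class weight matrix: nonzero off-diagonal entries give
# partial credit when a misdiagnosis is clinically close to the truth
weights = np.array([
    [1.0, 0.5, 0.0],
    [0.5, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

def generalized_score(truth, pred, w):
    """Sum w[i, j] over the multi-label confusion counts of truth vs. prediction."""
    confusion = truth.T @ pred  # confusion[i, j]: truth class i, predicted class j
    return float((w * confusion).sum())

truth = np.array([[1, 0, 0], [0, 1, 0]])  # binary (n_samples, n_classes) labels
pred = np.array([[0, 1, 0], [0, 1, 0]])   # first sample confuses class 0 with 1
print(generalized_score(truth, pred, weights))  # 1.5, instead of 1.0 under 0/1 scoring
```

Under a plain 0/1 diagonal matrix the confused first sample would earn nothing; the nonzero off-diagonal entry keeps half the credit.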

  7. sampling frequencies:

    1. (CPSC2018): 500 Hz

    2. (CPSC2018-2): 500 Hz

    3. (INCART): 257 Hz

    4. (PTB): 1000 Hz

    5. (PTB-XL): 500 Hz

    6. (Georgia): 500 Hz
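
Since the tranches differ in sampling frequency, a common preprocessing step is resampling to a shared rate (cf. load_resampled_data below, which uses 500 Hz). A minimal sketch with scipy, using a fake 257 Hz INCART-like signal:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 257, 500  # INCART rate -> common target rate
sig = np.random.randn(12, fs_in * 10)  # fake 10-second 12-lead signal, channel_first

# polyphase resampling; up/down give the rational rate-conversion factor
resampled = resample_poly(sig, up=fs_out, down=fs_in, axis=1)
print(resampled.shape)  # (12, 5000), i.e. 10 seconds at 500 Hz
```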

  8. all data are recorded with the lead ordering

    ["I", "II", "III", "aVR", "aVL", "aVF", "V1", "V2", "V3", "V4", "V5", "V6"]
    

    which can be verified using, for example, the following code:

    >>> db_dir = "/media/cfs/wenhao71/data/cinc2020_data/"
    >>> working_dir = "./working_dir"
    >>> dr = CINC2020(db_dir=db_dir, working_dir=working_dir)
    >>> set_leads = []
    >>> for tranche, l_rec in dr.all_records.items():
    ...     for rec in l_rec:
    ...         ann = dr.load_ann(rec)
    ...         leads = ann["df_leads"]["lead_name"].values.tolist()
    ...         if leads not in set_leads:
    ...             set_leads.append(leads)
    
  9. Challenge official website [1]. Webpage of the database on PhysioNet [2].

Note

  1. The datasets have been roughly processed to have a uniform format, hence differ from their original sources (e.g. in sampling frequency, sample duration, etc.)

  2. The original datasets might have richer metadata (especially those from PhysioNet), which can be fetched from the corresponding reader’s docstring or the website of the original source

  3. Each sub-dataset might have its own scheme of organizing data, which should be dealt with carefully

  4. There are few “absolute” diagnoses in 12-lead ECGs; large discrepancies in the interpretation of an ECG can be found even among experts. There is inevitably something lost in translation, especially without clinical context. This doesn’t mean that making an algorithm isn’t important

  5. The labels are noisy, which is something one has to deal with in all real-world data

  6. the classes on each of the following lines are considered the same (in the scoring matrix):

    • RBBB, CRBBB (NOT including IRBBB)

    • PAC, SVPB

    • PVC, VPB
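
Merging each such pair into one canonical abbreviation can be sketched with a plain lookup table (the pairs come from the list above; the normalize_labels helper here is made up for illustration, while torch_ecg performs this normalization via its aux-data module, cf. get_labels(..., normalize=True)):

```python
# map each equivalent abbreviation to a canonical form
EQUIV = {"CRBBB": "RBBB", "SVPB": "PAC", "VPB": "PVC"}

def normalize_labels(labels):
    """Replace equivalent abbreviations, dropping duplicates while keeping order."""
    seen, out = set(), []
    for lb in labels:
        canon = EQUIV.get(lb, lb)
        if canon not in seen:
            seen.add(canon)
            out.append(canon)
    return out

print(normalize_labels(["CRBBB", "RBBB", "IRBBB", "VPB"]))  # ['RBBB', 'IRBBB', 'PVC']
```

Note that IRBBB passes through untouched, matching the caveat above that it is NOT equivalent to RBBB/CRBBB.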

  7. unfortunately, the newly added tranches (C - F) have baseline drift and are much noisier. In contrast, the CPSC data have had the baseline removed and have a higher SNR

  8. on Aug. 1, 2020, the ADC gain (the “resolution” / “ADC” fields in the .hea files) of datasets INCART, PTB, and PTB-XL (tranches C, D, E) was corrected. After correction, (the .tar files of) the 3 datasets are all put in a “WFDB” subfolder. In order to keep the structures consistent, they are moved into “Training_StPetersburg”, “Training_PTB”, “WFDB” as previously. Using the following code, one can check the adc_gain and baseline of each tranche:

    >>> db_dir = "/media/cfs/wenhao71/data/cinc2020_data/"
    >>> working_dir = "./working_dir"
    >>> dr = CINC2020(db_dir=db_dir, working_dir=working_dir)
    >>> resolution = {tranche: set() for tranche in "ABCDEF"}
    >>> baseline = {tranche: set() for tranche in "ABCDEF"}
    >>> for tranche, l_rec in dr.all_records.items():
    ...     for rec in l_rec:
    ...         ann = dr.load_ann(rec)
    ...         resolution[tranche] = resolution[tranche].union(set(ann["df_leads"]["adc_gain"]))
    ...         baseline[tranche] = baseline[tranche].union(set(ann["df_leads"]["baseline"]))
    >>> print(resolution, baseline)
    {'A': {1000.0}, 'B': {1000.0}, 'C': {1000.0}, 'D': {1000.0}, 'E': {1000.0}, 'F': {1000.0}} {'A': {0}, 'B': {0}, 'C': {0}, 'D': {0}, 'E': {0}, 'F': {0}}
    
  9. the .mat files all contain digital signals, which have to be converted to physical values using the adc gain, baseline, etc. in the corresponding .hea files. wfdb.rdrecord() has already done this conversion, hence it greatly simplifies the data loading process. NOTE that there’s a difference when using wfdb.rdrecord: data from loadmat are in the “channel_first” format, while wfdb.rdrecord(...).p_signal produces data in the “channel_last” format
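
The conversion and the orientation difference can be sketched as follows (per the WFDB convention physical = (digital - baseline) / adc_gain; the numbers here are made up):

```python
import numpy as np

adc_gain, baseline = 1000.0, 0  # per-lead values parsed from the .hea file
d_signal = np.array([[0, 500, 1000], [0, -500, -1000]])  # digital, channel_first

# digital -> physical (mV); wfdb.rdrecord() performs this conversion internally
p_signal = (d_signal - baseline) / adc_gain
print(p_signal.tolist())  # [[0.0, 0.5, 1.0], [0.0, -0.5, -1.0]]

# wfdb.rdrecord(...).p_signal is channel_last, i.e. shape (n_samples, n_leads);
# transposing converts between the two layouts
channel_last = p_signal.T
print(channel_last.shape)  # (3, 2)
```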

  10. there are 3 equivalent class pairs (2 classes are equivalent if the corresponding value in the scoring matrix is 1): (RBBB, CRBBB), (PAC, SVPB), (PVC, VPB)

  11. in the newly (Feb. 2021) created dataset, the header files of each subset were gathered into one separate compressed file, since updates to the dataset are almost always made in the header files. The correct usage of ref. [8], after uncompressing, is to replace the header files in the folder All_training_WFDB by the header files from the 6 folders containing all header files from the 6 subsets.

Usage

  1. ECG arrhythmia detection

Issues

  1. reading the .hea files, the baselines of all records are 0; however, this is not the case if one plots the signal

  2. about half of the LAD records satisfy the “2-lead” criteria but fail the “3-lead” criteria, which means that their axis lies in (-30°, 0°), which is not truly LAD

  3. (Aug. 15, 2020; resolved, and changed to 1000) tranche F, the Georgia subset, has ADC gain 4880, which might be too high; the voltages thus obtained are too low. 1000 might be a suitable (correct) value of ADC gain for this tranche, just as for the other tranches.

  4. “E04603” (all leads), “E06072” (chest leads, especially V1-V3), “E06909” (lead V2), “E07675” (lead V3), “E07941” (lead V6), “E08321” (lead V6) have exceptionally large values at R peaks; reading (load_data) these records using wfdb would bring in nan values. One can check using the following code

    >>> rec = "E04603"
    >>> dr.plot(rec, dr.load_data(rec, backend="scipy", units="uv"))  # currently raising error
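
Independent of the reader, screening a loaded array for such nan values can be sketched as follows (the array here is synthetic):

```python
import numpy as np

data = np.array([[0.1, np.nan, 0.3], [0.2, 0.4, 0.6]])  # fake 2-lead snippet

# find leads containing any nan sample
bad_leads = np.where(np.isnan(data).any(axis=1))[0]
print(bad_leads.tolist())  # [0]

# one crude remedy: replace nan values by zeros before further processing
cleaned = np.nan_to_num(data, nan=0.0)
print(bool(np.isnan(cleaned).any()))  # False
```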
    

References

Citation

10.1088/1361-6579/abc960 10.22489/cinc.2020.236 10.13026/F4AB-0814

Parameters:
  • db_dir (path-like, optional) – Storage path of the database. If not specified, data will be fetched from PhysioNet.

  • working_dir (path-like, optional) – Working directory, to store intermediate files and log files.

  • verbose (int, default 1) – Level of logging verbosity.

  • kwargs (dict, optional) – Auxiliary keyword arguments.

property database_info: DataBaseInfo

The DataBaseInfo object of the database.

download() None[source]

Download the database from PhysioNet.

get_absolute_path(rec: str | int, extension: str | None = None) Path[source]

Get the absolute path of the record.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • extension (str, optional) – Extension of the file.

Returns:

abs_fp – Absolute path of the file.

Return type:

pathlib.Path

get_ann_filepath(rec: str | int, with_ext: bool = True) str[source]

Get the absolute file path of the annotation file of the record.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • with_ext (bool, default True) – If True, the returned file path comes with file extension, otherwise without file extension.

Returns:

Absolute file path of the annotation file of the record.

Return type:

pathlib.Path

get_data_filepath(rec: str | int, with_ext: bool = True) Path[source]

Get the absolute file path of the data file of the record.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • with_ext (bool, default True) – If True, the returned file path comes with file extension, otherwise without file extension.

Returns:

Absolute file path of the data file of the record.

Return type:

pathlib.Path

get_fs(rec: str | int) Real[source]

Get the sampling frequency of the record.

Parameters:

rec (str or int) – Record name or index of the record in all_records.

Returns:

fs – Sampling frequency of the record.

Return type:

numbers.Real

get_header_filepath(rec: str | int, with_ext: bool = True) Path[source]

Get the absolute file path of the header file of the record.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • with_ext (bool, default True) – If True, the returned file path comes with file extension, otherwise without file extension.

Returns:

Absolute file path of the header file of the record.

Return type:

pathlib.Path

get_labels(rec: str | int, scored_only: bool = True, fmt: str = 's', normalize: bool = True) List[str][source]

Get labels (diagnoses or arrhythmias) of the record.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • scored_only (bool, default True) – If True, only get the labels that are scored in the CINC2020 official phase.

  • fmt (str, default "s") –

    Format of labels, one of the following (case insensitive):

    • ”a”, abbreviations

    • ”f”, full names

    • ”s”, SNOMED CT Code

  • normalize (bool, default True) – If True, the labels will be transformed into their equivalents, which are defined in utils.utils_misc.cinc2020_aux_data.py.

Returns:

labels – The list of labels of the record.

Return type:

List[str]

get_subject_id(rec: str | int) int[source]

Attach a unique subject ID for the record.

Parameters:

rec (str or int) – Record name or index of the record in all_records.

Returns:

Subject ID associated with the record.

Return type:

int

get_subject_info(rec: str | int, items: List[str] | None = None) dict[source]

Get auxiliary information of a subject (a record) stored in the header files.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • items (List[str], optional) – Items of the subject’s information (e.g. sex, age, etc.).

Returns:

subject_info – Information about the subject, including “age”, “sex”, “medical_prescription”, “history”, “symptom_or_surgery”.

Return type:

dict

get_tranche_class_distribution(tranches: Sequence[str], scored_only: bool = True) Dict[str, int][source]

Compute class distribution in the tranches.

Parameters:
  • tranches (Sequence[str]) – Tranche symbols (A-F).

  • scored_only (bool, default True) – If True, only classes that are scored in the CINC2020 official phase are considered for computing the distribution.

Returns:

distribution – Distribution of classes in the tranches. Keys are abbreviations of the classes, and values are the counts of the corresponding classes in the tranches.

Return type:

dict
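
The per-tranche counting itself amounts to aggregating the label lists of all records, which can be sketched as follows (the records and labels here are made up; the real method obtains them via load_ann):

```python
from collections import Counter

# hypothetical labels per record in one tranche
records_labels = {
    "A0001": ["AF", "RBBB"],
    "A0002": ["AF"],
    "A0003": ["NSR"],
}

# accumulate class counts over all records
distribution = Counter()
for labels in records_labels.values():
    distribution.update(labels)
print(dict(distribution))  # {'AF': 2, 'RBBB': 1, 'NSR': 1}
```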

load_ann(rec: str | int, raw: bool = False, backend: str = 'wfdb') dict | str[source]

Load annotations of the record.

The annotations are stored in the .hea files.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • raw (bool, default False) – If True, the raw annotations without parsing will be returned.

  • backend ({"wfdb", "naive"}, optional) – If is “wfdb”, wfdb.rdheader() will be used to load the annotations. If is “naive”, annotations will be parsed from the lines read from the header files.

Returns:

ann_dict – The annotations with items listed in self.ann_items.

Return type:

dict or str

load_data(rec: str | int, leads: str | int | Sequence[int | str] | None = None, data_format: str = 'channel_first', backend: str = 'wfdb', units: str | None = 'mV', fs: Real | None = None, return_fs: bool = False) ndarray | Tuple[ndarray, Real][source]

Load physical (converted from digital) ECG data, which is more understandable for humans; or load digital signal directly.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • leads (str or int or List[str] or List[int], optional) – The leads of the ECG data to be loaded.

  • data_format (str, default "channel_first") – Format of the ECG data, “channel_last” (alias “lead_last”), or “channel_first” (alias “lead_first”).

  • backend ({"wfdb", "scipy"}, optional) – The backend data reader, by default “wfdb”.

  • units (str or None, default "mV") – Units of the output signal, can also be “μV” (aliases “uV”, “muV”). None for digital data, without digital-to-physical conversion.

  • fs (numbers.Real, optional) – Sampling frequency of the output signal. If not None, the loaded data will be resampled to this frequency, otherwise, the original sampling frequency will be used.

  • return_fs (bool, default False) – Whether to return the sampling frequency of the output signal.

Returns:

  • data (numpy.ndarray) – The ECG data of the record.

  • data_fs (numbers.Real, optional) – Sampling frequency of the output signal. Returned if return_fs is True.

load_header(rec: str | int, raw: bool = False) dict | str[source]

Load the header (annotations) of the record.

The annotations are stored in the .hea files.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • raw (bool, default False) – If True, the raw annotations without parsing will be returned.

Returns:

ann_dict – The annotations with items listed in self.ann_items.

Return type:

dict or str

load_raw_data(rec: str | int, backend: str = 'scipy') ndarray[source]

Load raw data from the corresponding files with no further processing, in order to facilitate feeding data into the run_12ECG_classifier function.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • backend ({"scipy", "wfdb"}, optional) – The backend data reader, by default “scipy”. Note that “scipy” provides data in the format of “lead_first”, while “wfdb” provides data in the format of “lead_last”.

Returns:

raw_data – Raw data (d_signal) loaded from corresponding data file, without digital-to-analog conversion (DAC) and resampling.

Return type:

numpy.ndarray

load_resampled_data(rec: str | int, data_format: str = 'channel_first', siglen: int | None = None) ndarray[source]

Resample the data of rec to 500 Hz, or load the already-resampled 500 Hz data if the corresponding data file exists.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • data_format (str, default "channel_first") – Format of the ECG data, “channel_last” (alias “lead_last”), or “channel_first” (alias “lead_first”).

  • siglen (int, optional) – Signal length, in number of samples. If not None, signals longer than siglen will be sliced to length siglen. Used, for example, when preparing data for model training.

Returns:

The 2D resampled (and perhaps sliced, hence 3D) signal data.

Return type:

numpy.ndarray

plot(rec: str | int, data: ndarray | None = None, ann: Dict[str, ndarray] | None = None, ticks_granularity: int = 0, leads: str | List[str] | None = None, same_range: bool = False, waves: Dict[str, Sequence[int]] | None = None, **kwargs: Any) None[source]

Plot the signals of a record or external signals (units in μV), with metadata (fs, labels, tranche, etc.), possibly also along with wave delineations.

Parameters:
  • rec (str or int) – Record name or index of the record in all_records.

  • data (numpy.ndarray, optional) – (12-lead) ECG signal to plot, should be of the format “channel_first”, and compatible with leads. If is not None, data of rec will not be used. This is useful when plotting filtered data.

  • ann (dict, optional) – Annotations for data, with 2 items: “scored”, “all”. Ignored if data is None.

  • ticks_granularity (int, default 0) – Granularity to plot axis ticks, the higher the more ticks: 0 (no ticks) -> 1 (major ticks) -> 2 (major + minor ticks).

  • leads (str or List[str], optional) – The leads of the ECG signal to plot.

  • same_range (bool, default False) – If True, all leads are forced to have the same y range.

  • waves (dict, optional) – Indices of the wave critical points, including “p_onsets”, “p_peaks”, “p_offsets”, “q_onsets”, “q_peaks”, “r_peaks”, “s_peaks”, “s_offsets”, “t_onsets”, “t_peaks”, “t_offsets”.

  • kwargs (dict, optional) – Additional keyword arguments to pass to matplotlib.pyplot.plot().

TODO

  1. Slice too long records, and plot separately for each segment.

  2. Plot waves using matplotlib.pyplot.axvspan().

Note

Locator of plt has a default MAXTICKS of 1000. Without modifying this number, at most 40 seconds of signal can be plotted at once.
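
Raising the limit can be sketched as follows (MAXTICKS is a class attribute of matplotlib.ticker.Locator, so setting it affects all locators globally; 3000 is an arbitrary larger value):

```python
from matplotlib import ticker

# default is 1000; exceeding it raises "Locator attempting to generate
# more than MAXTICKS ticks" when plotting long signals with fine ticks
ticker.Locator.MAXTICKS = 3000
print(ticker.Locator.MAXTICKS)  # 3000
```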

Contributors: Jeethan and WEN Hao

property url: List[str]

URL of the database index page for downloading.