Disaggregate package

This package defines an abstract disaggregation class that is compatible with all PyTorch models defined in the config module and with the NILMtk API. It thus relieves the user of the burden of data formatting and of interfacing with NILMtk.
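
As a hedged illustration, an experiment could be set up as sketched below. The parameter keys shown ('model_name', 'in_size', 'max_nb_epochs') are assumptions made for this sketch; the exact keys depend on the chosen model's entry in the config module.

    from deep_nilmtk.disaggregate.nilm_experiment import NILMExperiment

    # Hypothetical parameter dictionary: the exact keys are defined by the
    # chosen model's entry in the config module.
    params = {
        'model_name': 'seq2point',   # assumed model key, for illustration
        'in_size': 99,               # assumed input window length
        'max_nb_epochs': 20,         # assumed training budget
    }

    experiment = NILMExperiment(params)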

Nilm_experiment class

class deep_nilmtk.disaggregate.nilm_experiment.NILMExperiment(params)[source]

This class defines a NILM experiment. It is compatible with both single- and multi-appliance models and offers advanced features such as cross-validation and hyper-parameter optimization during the training phase. The class is independent of the deep model used for load disaggregation.

Note

For a PyTorch model to be compatible with this class, an entry should be added for this model in the config module.
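
As an illustration only, such an entry might look like the following sketch, assuming the config module keeps a dictionary that maps model names to model classes. The registry name MODELS and the key layout are assumptions, not the module's confirmed structure.

    import torch.nn as nn

    class MyDisaggregator(nn.Module):
        """Hypothetical PyTorch model to be registered in the config module."""
        def __init__(self, params):
            super().__init__()
            self.net = nn.Linear(params['in_size'], 1)

        def forward(self, x):
            return self.net(x)

    # Assumed registry shape: mapping a model name to its class so that
    # NILMExperiment can instantiate the model by name.
    MODELS = {
        'my_disaggregator': {'model': MyDisaggregator},
    }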

disaggregate_chunk(test_main_list, do_preprocessing=True)[source]

Uses the trained models to disaggregate test_main_list. It is compatible with both single- and multi-appliance models; a call sketch follows the return type below.

Parameters
  • test_main_list (list of pd.DataFrame) -- Aggregate power measurements.

  • do_preprocessing (bool, optional) -- Specifies whether pre-processing should be performed. Defaults to True.

Returns

Appliance power measurements.

Return type

list of pd.DataFrame
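
For illustration, a minimal call sketch, reusing the hypothetical experiment instance from the package-level example above:

    import pandas as pd

    # Hypothetical aggregate power readings: one DataFrame per test chunk.
    test_mains = [pd.DataFrame({'power': [120.0, 118.5, 950.2, 947.8]})]

    # One DataFrame of per-appliance power estimates is returned per chunk.
    predictions = experiment.disaggregate_chunk(test_mains, do_preprocessing=True)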

get_net_and_loaders()[source]

Returns an instance of the specified model and the corresponding dataloader.

Returns

(model, dataloader)

Return type

tuple(nn.Module, torch.utils.data.Dataset)

multi_appliance_disaggregate(test_main_list, model=None, do_preprocessing=True)[source]

Performs load disaggregation for multi-appliance models.

multi_appliance_fit()[source]

Trains the specified models for each appliance separately, taking into account cross-validation and hyper-parameter optimisation where enabled. The checkpoints for each model are saved in the corresponding path.

objective(trial, train_loader=None, val_loader=None, fold_idx=None)[source]

The objective function to be used with Optuna. This function requires the model under study to implement a static function called suggest_hparams() [see the model documentation for more information]; a sketch is given after the parameter list below.

Parameters
  • trial (Optuna.Trial) -- The current Optuna trial.

  • train_loader (DataLoader, optional) -- training dataLoader for the current experiment. Defaults to None.

  • val_loader (DataLoader, optional) -- validation dataLoader for the current experiment. Defaults to None.

  • fold_idx (int, optional) -- Index of the current fold when cross-validation is used. Defaults to None.

Raises

Exception -- In case the model does not suggest any parameters.

Returns

The best validation loss achieved.

Return type

float
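
A minimal sketch of such a static suggest_hparams() method, using standard Optuna suggestion calls. The hyper-parameter names and ranges below are assumptions chosen for illustration.

    import torch.nn as nn

    class MyDisaggregator(nn.Module):
        """Hypothetical model exposing the static method required by objective()."""

        @staticmethod
        def suggest_hparams(trial):
            # Standard Optuna suggestion calls; names and ranges are illustrative.
            return {
                'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True),
                'hidden_size': trial.suggest_int('hidden_size', 32, 256),
                'dropout': trial.suggest_float('dropout', 0.0, 0.5),
            }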

objective_cv(trial)[source]

The objective function for Optuna when cross-validation is also used.

Parameters

trial (Optuna.Trial) -- An optuna trial

Returns

Average of the best validation losses over the considered folds.

Return type

float

partial_fit(mains, sub_main, do_preprocessing=True, **load_kwargs)[source]

Trains the models for the appliances according to the model name specified in the experiment's definition. It starts with data pre-processing and formatting, then trains the model according to its type (single- or multi-task). A usage sketch follows the parameter list below.

Parameters
  • mains (list of pd.DataFrame) -- Aggregate power measurements.

  • sub_main (list of pd.DataFrame) -- Appliance power measurements.

  • do_preprocessing (bool, optional) -- Specifies whether pre-processing should be performed. Defaults to True.
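
A hedged training sketch; the DataFrames below are hypothetical stand-ins for the aggregate and sub-metered readings that NILMtk would normally provide:

    import pandas as pd

    # Hypothetical aggregate and appliance-level measurements.
    mains = [pd.DataFrame({'power': [300.0, 310.5, 950.0, 945.2]})]
    sub_main = [pd.DataFrame({'kettle': [0.0, 0.0, 650.0, 648.1]})]

    experiment.partial_fit(mains, sub_main, do_preprocessing=True)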

save_best_model(study, trial)[source]

Keeps track of the trial yielding the best results; a callback-wiring sketch follows the parameter list below.

Parameters
  • study -- Optuna study

  • trial -- Optuna trial
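
Because save_best_model() has the (study, trial) signature that Optuna expects of optimization callbacks, it could be wired in as sketched below. The trial count is arbitrary, and experiment is the hypothetical instance from the earlier sketches.

    import optuna

    study = optuna.create_study(direction='minimize')

    # Optuna invokes the callback after every trial, letting the
    # experiment record the best-performing one.
    study.optimize(experiment.objective, n_trials=20,
                   callbacks=[experiment.save_best_model])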

single_appliance_disaggregate(test_main_list, model=None, do_preprocessing=True)[source]

Performs load disaggregation for single-appliance models. If Optuna was used during the training phase, it disaggregates test_main_list using only the best trial. If cross-validation was used during training, it returns the average of the predictions across all folds for each appliance; in this latter case, the predictions for each fold are also logged in the results folder under the name [model_name]_[appliance_name]_all_folds_predictions.p (a loading sketch is given below). When both Optuna and cross-validation are used, it returns the average predictions across all folds for the best trial only.

Parameters
  • test_main_list (list of pd.DataFrame) -- Aggregate power measurements.

  • model (dict, optional) -- Pre-trained appliance models. Defaults to None.

  • do_preprocessing (bool, optional) -- Specifies whether pre-processing should be performed. Defaults to True.

Returns

Estimated power consumption of the considered appliances.

Return type

list of dict
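
Since per-fold predictions are pickled under the naming pattern quoted above, they could be inspected afterwards as sketched here. The concrete file name and results folder below are assumptions.

    import pickle

    # Hypothetical path following the documented naming pattern.
    path = 'results/seq2point_kettle_all_folds_predictions.p'

    with open(path, 'rb') as f:
        fold_predictions = pickle.load(f)  # per-fold predictions object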

single_appliance_fit()[source]

Trains the specified models for each appliance separately, taking into account cross-validation and hyper-parameter optimisation where enabled. The checkpoints for each model are saved in the corresponding path.

train_model(appliance_name, train_loader, val_loader, exp_name, mean=None, std=None, trial_idx=None, fold_idx=None, model=None)[source]

Trains a single PyTorch model.

Parameters
  • appliance_name (str) -- Name of the appliance to be modeled.

  • train_loader (DataLoader) -- training dataLoader for the current appliance

  • val_loader (DataLoader) -- validation dataLoader for the current appliance

  • exp_name (str) -- the name of the experiment

  • mean (float, optional) -- mean value of the target appliance power. Defaults to None.

  • std (float, optional) -- std value of the target appliance power. Defaults to None.

  • trial_idx (int, optional) -- ID of the current optuna trial if optuna is used. Defaults to None.

  • fold_idx (int, optional) -- the number of the fold if CV is used. Defaults to None.

  • model -- Lightning model of the current appliance. Defaults to None.

Returns

In the case of using Optuna, returns the best validation loss and the path to the best checkpoint.

Return type

tuple(float, str)