Loader package

This package contains a set of map-style PyTorch datasets for the different models implemented in the tool. Through these classes, Deep-NILMtk implements a generic and memory-efficient data model that feeds batches of data to the deep neural network during model training. It thus offers a set of ready-to-use data generators, with the goal of speeding up the research process and promoting code re-usability.
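As a rough illustration of this data model, a map-style dataset wraps the raw arrays and exposes `__len__`/`__getitem__`, so a PyTorch `DataLoader` can draw windows lazily instead of materializing them all in memory. The class and parameter names below are illustrative assumptions, not the package's exact API:

```python
# Minimal sketch of the map-style dataset protocol these loaders follow.
# Names and windowing details are assumptions, not the package's API.
class SlidingWindowDataset:
    def __init__(self, inputs, targets=None, in_size=99, stride=1):
        self.inputs = inputs      # aggregate power, 1-D sequence
        self.targets = targets    # appliance power, or None at test time
        self.in_size = in_size
        self.stride = stride

    def __len__(self):
        # number of full windows that fit in the aggregate signal
        return max(0, (len(self.inputs) - self.in_size) // self.stride + 1)

    def __getitem__(self, idx):
        start = idx * self.stride
        x = self.inputs[start:start + self.in_size]
        if self.targets is None:
            return x              # testing: aggregate power only
        return x, self.targets[start:start + self.in_size]
```

A `torch.utils.data.DataLoader` can then batch such a dataset directly, which is what makes the model memory-efficient: only the indices are enumerated up front, never the windows themselves.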

bertdataset class

class deep_nilmtk.loader.bertdataset.BERTDataset(inputs, targets=None, params={})[source]

This class is the data loader corresponding to the BERT4NILM model. The original code can be found here: https://github.com/Yueeeeeeee/BERT4NILM/

The normalization of the target appliances is performed at this level, just after the operational states are generated using predefined thresholds for the considered appliances.

Parameters
  • inputs (np.array) -- The aggregate power.

  • targets (np.array, optional) -- The target appliance(s) power consumption, defaults to None

  • params (dict, optional) -- Hyper-parameter values, defaults to {}

The hyperparameter dictionary is expected to include the following parameters:

Parameters
  • threshold (List of floats) -- The threshold for states generation in the target power consumption, defaults to None

  • cutoff (List of floats) -- The cutoff for states generation in the target power consumption, defaults to None

  • min_on (List of floats) -- The min on duration for states generation in the target power consumption, defaults to None

  • min_off (List of floats) -- The min off duration for states generation in the target power consumption, defaults to None

  • in_size (int) -- The length of the input sequence, defaults to 488.

  • stride (int) -- The distance between two consecutive sequences, defaults to 1.

  • mask_prob (float) -- The masking probability used to generate sequences, defaults to 0.25

compute_status(data)[source]

Generates the operational status of the target power using the specified parameters.

Parameters

data (np.array) -- Power consumption of target appliance

Returns

operational status

Return type

np.array
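One plausible sketch of such status generation, using thresholding followed by minimum-duration filtering (the exact BERT4NILM logic, e.g. the order in which short ON bursts and OFF gaps are handled, may differ):

```python
def _runs(seq):
    """Yield (value, start, end) for each run of equal values."""
    start = 0
    for i in range(1, len(seq) + 1):
        if i == len(seq) or seq[i] != seq[start]:
            yield seq[start], start, i
            start = i

def compute_status(power, threshold, min_on, min_off):
    """Illustrative sketch, not the library's implementation:
    threshold the power, then drop ON runs shorter than min_on and
    fill OFF gaps shorter than min_off that sit between ON runs."""
    status = [1 if p >= threshold else 0 for p in power]
    for value, s, e in list(_runs(status)):
        if value == 1 and e - s < min_on:
            status[s:e] = [0] * (e - s)      # ON burst too short: drop
        elif value == 0 and e - s < min_off and 0 < s and e < len(status):
            status[s:e] = [1] * (e - s)      # OFF gap too short: fill
    return status
```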

padding_seqs(in_array)[source]

This function pads sequences shorter than the sequence length specified during initialization.

Parameters

in_array (np.array) -- Sequence of power consumption

Returns

Padded sequence of power consumption

Return type

np.array
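A minimal sketch of such padding, assuming zero-padding at the tail (the actual placement of the padded values may differ in the library):

```python
def padding_seqs(in_array, seq_len):
    # Illustrative only: zero-pad a short window up to the configured
    # sequence length; longer windows are truncated.
    if len(in_array) >= seq_len:
        return list(in_array[:seq_len])
    return list(in_array) + [0] * (seq_len - len(in_array))
```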

generaldataset class

class deep_nilmtk.loader.generaldataset.generalDataLoader(inputs, targets=None, params={})[source]

This class implements the two most common NILM data generators: seq-to-seq and seq-to-point. The data is generated according to the sequence type. For seq-to-point models, the position of the point can also be specified through the point_position parameter.

Parameters
  • inputs (np.array) -- The aggregate power.

  • targets (np.array, optional) -- The target appliance(s) power consumption, defaults to None

  • params (dict, optional) -- Hyper-parameter values, defaults to {}

The hyperparameter dictionary is expected to include the following parameters:

Parameters
  • in_size (int) -- The input sequence length, defaults to 99

  • out_size (int) -- The target sequence length, defaults to 0.

  • point_position -- The position of the point for sequence, defaults to seq2quantile

  • quantiles -- The list of quantiles to generate, defaults to [0.1, 0.25, 0.5, 0.75, 0.90]

  • pad_at_begin (bool, optional) -- Specifies how the padded values are inserted, defaults to False
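The difference between the two sequence types can be sketched as follows; this is a simplified stand-in for what get_sample does, and the function and parameter names are illustrative, not the class's exact API:

```python
def make_sample(aggregate, appliance, index, in_size,
                seq_type="seq2point", point_position="mid"):
    # Illustrative sketch of the two common NILM generators.
    x = aggregate[index:index + in_size]
    if seq_type == "seq2seq":
        # seq-to-seq: the target is the whole aligned window
        y = appliance[index:index + in_size]
    else:
        # seq-to-point: the target is a single value, whose position
        # within the window is configurable
        offset = in_size // 2 if point_position == "mid" else in_size - 1
        y = appliance[index + offset]
    return x, y
```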

get_sample(index)[source]

Generates a sample of the power sequence.

Parameters

index (int) -- The start index of the first sequence

Returns

Aggregate power and target consumption during training, and only aggregate power during testing

Return type

np.array

deep_nilmtk.loader.generaldataset.pad_data(data, context_size, target_size, pad_at_begin=False)[source]

Performs data padding for both target and aggregate consumption

Parameters
  • data (np.array) -- The aggregate power

  • context_size (int) -- The input sequence length

  • target_size (int) -- The target sequence length

  • pad_at_begin (bool, optional) -- Specifies how the padded values are inserted, defaults to False

Returns

The padded aggregate power.

Return type

np.array
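A plausible sketch of this padding, assuming the aggregate is extended so that every target window has a full input context; the exact padding amounts and placement are assumptions:

```python
def pad_data(data, context_size, target_size, pad_at_begin=False):
    # Illustrative sketch: extend the sequence by the difference
    # between the input and target lengths, either all at the
    # beginning or split evenly around the signal.
    units_to_pad = context_size - target_size
    if units_to_pad <= 0:
        return list(data)
    if pad_at_begin:
        return [0] * units_to_pad + list(data)
    left = units_to_pad // 2
    right = units_to_pad - left
    return [0] * left + list(data) + [0] * right
```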

tempooldataset class

class deep_nilmtk.loader.tempooldataset.TemPoolLoader(inputs, targets=None, params={})[source]

This class is the data loader for the temporal pooling NILM model. The original code can be found here: https://github.com/UCA-Datalab/nilm-thresholding

Parameters
  • inputs (np.array) -- The aggregate power.

  • targets (np.array, optional) -- The target appliance(s) power consumption, defaults to None

  • params (dict, optional) -- Hyper-parameter values, defaults to {}

The hyperparameter dictionary is expected to include the following parameters:

Parameters
  • in_size (int) -- The input sequence length, defaults to 99

  • border (int) -- The delay between the input and output sequences, defaults to 30.

deep_nilmtk.loader.tempooldataset.pad_data(data, units_to_pad, border=0)[source]

Performs data padding for both target and aggregate consumption

Parameters
  • data (np.array) -- The aggregate power

  • units_to_pad (tuple(int, int)) -- The number of values to add to the input and output sequences.

  • border (int, optional) -- The delay between input and output sequence, defaults to 0

Returns

The padded aggregate power.

Return type

np.array
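One plausible reading of this signature, sketched below; how the border interacts with the per-side padding amounts is an assumption, not the library's documented behavior:

```python
def pad_data(data, units_to_pad, border=0):
    # Illustrative guess: units_to_pad = (left, right) extra values
    # around the sequence, each widened by the input/output delay.
    left, right = units_to_pad
    return [0] * (left + border) + list(data) + [0] * (right + border)
```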

wavenetdataset class

class deep_nilmtk.loader.wavenetdataset.WaveNetDataLoader(inputs, targets=None, params={})[source]

This class is the data loader for the WaveNet NILM model. The original code can be found here: https://github.com/jiejiang-jojo/fast-seq2point/

Parameters
  • inputs (np.array) -- The aggregate power.

  • targets (np.array, optional) -- The target appliance(s) power consumption, defaults to None

  • params (dict, optional) -- Hyper-parameter values, defaults to {}

The hyperparameter dictionary is expected to include the following parameters:

Parameters
  • in_size (int) -- The input sequence length, defaults to 99

  • kernel_size (int) -- The size of the kernel, defaults to 3.

  • layers (int) -- The number of layers of the model, defaults to 6.

Note

This data loader generates a target sequence centered in the input sequence, with a length difference L between the input and output sequences, where L = (2 ** layers - 1) * (kernel_size - 1) + 1.
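The formula in the note is straightforward to evaluate; with the defaults above (layers=6, kernel_size=3) it gives a difference of (2**6 - 1) * (3 - 1) + 1 = 127 samples, so the input window must be 127 samples longer than the target window:

```python
def length_difference(layers, kernel_size):
    # L = (2**layers - 1) * (kernel_size - 1) + 1, from the note above
    return (2 ** layers - 1) * (kernel_size - 1) + 1

print(length_difference(6, 3))  # defaults: 63 * 2 + 1 = 127
```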

get_sample(index)[source]

Generates a sample of the power sequence.

Parameters

index (int) -- The start index of the first sequence

Returns

Aggregate power and target consumption during training, and only aggregate power during testing

Return type

np.array

deep_nilmtk.loader.wavenetdataset.pad_data(data, context_size)[source]

Performs data padding for both target and aggregate consumption

Parameters
  • data (np.array) -- The aggregate power

  • context_size (int) -- The length of the sequence.

Returns

The padded aggregate power.

Return type

np.array