Model package¶
This package contains the network implementations for the models available in Deep-NILMtk, as well as one generic Lightning model that works independently of the underlying PyTorch model.
baselines module¶
- class deep_nilmtk.model.baselines.DAE(params)[source]¶
PyTorch implementation of the DAE NILM model as proposed in: https://dl.acm.org/doi/pdf/10.1145/3360322.3360844
- Parameters
params (dict) -- dictionary of values relative to hyper-parameters.
Besides the parameters inherited from the parent model, the params dictionary is expected to include the following keys:
- Parameters
feature_type (int) -- The number of input features, defaults to 1.
appliances (list of str) -- A list of appliances.
pool_filter (int) -- The size of pooling filter, defaults to 50.
latent_size (int) -- The number of nodes in the last layer, defaults to 1024.
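Like the other baselines on this page, the model is typically configured through NILMExperiment. A minimal sketch, assuming 'DAE' is the registered model name (the remaining values are illustrative):

'DAE': NILMExperiment({
    "model_name": 'DAE',
    'in_size': 480,            # illustrative input sequence length
    'feature_type': 'mains',
    'pool_filter': 50,
    'latent_size': 1024,
    'max_nb_epochs': 1,
})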
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.baselines.RNN(params)[source]¶
PyTorch implementation of the RNN NILM model as proposed in: https://dl.acm.org/doi/pdf/10.1145/3360322.3360844
- Parameters
params (dict) -- dictionary of values relative to hyper-parameters.
Besides the parameters inherited from the parent model, the params dictionary is expected to include the following keys:
- Parameters
feature_type (int) -- The number of input features, defaults to 1.
appliances (list of str) -- A list of appliances.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.baselines.S2P(params)[source]¶
This class is an abstract class for sequence-to-point models. By implementing this class, you can avoid implementing the predict and the forward functions.
- Parameters
params (dict) -- dictionary of values relative to hyper-parameters.
The dictionary is expected to include the following keys:
- Parameters
target_norm (str) -- the type of normalization of the target power, defaults to 'z-norm'.
mean (float) -- The mean consumption value of the target appliance, defaults to 0.
std (float) -- The std of the consumption of the target appliance, defaults to 1.
min (float) -- The minimum consumption value of the target appliance, defaults to 0.
max (float) -- The maximum consumption value of the target appliance, defaults to 1.
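These statistics are used to undo the target normalization when mapping network outputs back to watts. A minimal sketch of the idea, assuming the helper name and the non-z-norm branch (min-max scaling) purely as illustrations:

def denormalize(pred, params):
    # undo 'z-norm' standardization, otherwise assume min-max scaling
    if params['target_norm'] == 'z-norm':
        return pred * params['std'] + params['mean']
    return pred * (params['max'] - params['min']) + params['min']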
- predict(model, test_dataloader)[source]¶
Generates predictions during testing for the test_dataloader.
- Parameters
model -- pre-trained model.
test_dataloader (dataLoader) -- data loader for the testing period.
- Returns
Disaggregated power consumption.
- Return type
tensor
- step(batch)[source]¶
Disaggregates a batch of data
- Parameters
batch (Tensor) -- A batch of data
- Returns
The loss and the MAE, as returned from the model.
- Return type
tuple(float, float)
- static suggest_hparams(self, trial)[source]¶
Returns the list of hyper-parameters whose values will be suggested by Optuna.
- Parameters
trial (optuna.trial) -- Optuna Trial.
- Returns
Parameters with values suggested by Optuna
- Return type
dict
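A minimal sketch of an implementation, assuming a hypothetical search space over the hyper-parameters listed above (both the keys and the ranges are illustrative):

def suggest_hparams(self, trial):
    # illustrative search space; names mirror the params documented above
    return {
        'latent_size': trial.suggest_int('latent_size', 256, 1024),
        'pool_filter': trial.suggest_int('pool_filter', 16, 64),
    }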
- training: bool¶
- class deep_nilmtk.model.baselines.S2S(params)[source]¶
This class is an abstract class for sequence-to-sequence models. By implementing this class, you can avoid implementing the predict and the forward functions.
- Parameters
params (dict) -- dictionary of values relative to hyper-parameters.
The dictionary is expected to include the following keys:
- Parameters
target_norm (str) -- the type of normalization of the target power, defaults to 'z-norm'.
mean (float) -- The mean consumption value of the target appliance, defaults to 0.
std (float) -- The std of the consumption of the target appliance, defaults to 1.
min (float) -- The minimum consumption value of the target appliance, defaults to 0.
max (float) -- The maximum consumption value of the target appliance, defaults to 1.
- aggregate_seqs(prediction)[source]¶
Aggregates the overlapping sequences using the mean
- Parameters
prediction (tensor) -- test predictions of the current model
- Returns
Aggregated sequence
- Return type
tensor
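A minimal sketch of mean aggregation of overlapping windows, assuming predictions of shape (n_windows, window_size) generated with stride 1 (the helper is illustrative, not the exact implementation):

import torch

def aggregate_seqs_sketch(prediction):
    n, w = prediction.shape
    total = torch.zeros(n + w - 1)
    counts = torch.zeros(n + w - 1)
    for i in range(n):
        # each window contributes to w consecutive time steps
        total[i:i + w] += prediction[i]
        counts[i:i + w] += 1
    return total / counts  # mean over all windows covering each step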
- predict(model, test_dataloader)[source]¶
Generates predictions during testing for the test_dataloader.
- Parameters
model -- pre-trained model.
test_dataloader -- data loader for the testing period.
- Returns
Disaggregated power consumption.
- Return type
tensor
- step(batch)[source]¶
Disaggregates a batch of data
- Parameters
batch (Tensor) -- A batch of data.
- Returns
The loss and the MAE, as returned from the model.
- Return type
tuple(float,float)
- training: bool¶
- class deep_nilmtk.model.baselines.Seq2Point(params)[source]¶
PyTorch implementation of the Seq-to-point NILM model as proposed in: https://dl.acm.org/doi/pdf/10.1145/3360322.3360844
- Parameters
params (dict) -- dictionary of values relative to hyper-parameters.
Besides the parameters inherited from the parent model, the params dictionary is expected to include the following keys:
- Parameters
feature_type (int) -- The number of input features, defaults to 1.
appliances (list of str) -- A list of appliances.
pool_filter (int) -- The size of pooling filter, defaults to 50.
latent_size (int) -- The number of nodes in the last layer, defaults to 1024.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.baselines.Seq2Seq(params)[source]¶
PyTorch implementation of the Seq-to-seq NILM model as proposed in: https://dl.acm.org/doi/pdf/10.1145/3360322.3360844
- Parameters
params (dict) -- dictionary of values relative to hyper-parameters.
Besides the parameters inherited from the parent model, the params dictionary is expected to include the following keys:
- Parameters
feature_type (int) -- The number of input features, defaults to 1.
appliances (list of str) -- A list of appliances.
pool_filter (int) -- The size of pooling filter, defaults to 50.
latent_size (int) -- The number of nodes in the last layer, defaults to 1024.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.baselines.WindowGRU(params)[source]¶
PyTorch implementation of the Window-GRU NILM model as proposed in: https://dl.acm.org/doi/pdf/10.1145/3360322.3360844
- Parameters
params (dict) -- dictionary of values relative to hyper-parameters.
Besides the parameters inherited from the parent model, the params dictionary is expected to include the following keys:
- Parameters
feature_type (int) -- The number of input features, defaults to 1.
appliances (list of str) -- A list of appliances.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
bert4nilm module¶
- class deep_nilmtk.model.bert4nilm.Attention[source]¶
Attention layer
- Parameters
query (tensor) -- Query values
key (tensor) -- Key values
value (tensor) -- Values
mask (tensor, optional) -- Mask for a causal model, defaults to None
dropout (float, optional) -- Dropout, defaults to None
- Returns
output of the attention layer and attention score
- Return type
tuple(tensor, tensor)
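The computation follows standard scaled dot-product attention. A minimal sketch (the helper name is illustrative):

import math
import torch

def attention_sketch(query, key, value, mask=None, dropout=None):
    # similarity of every query with every key, scaled by sqrt(d_k)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(query.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = torch.softmax(scores, dim=-1)
    if dropout is not None:
        p_attn = dropout(p_attn)
    return torch.matmul(p_attn, value), p_attn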
- forward(query, key, value, mask=None, dropout=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.bert4nilm.BERT4NILM(params)[source]¶
BERT4NILM implementation. Original paper can be found here: https://dl.acm.org/doi/pdf/10.1145/3427771.3429390 Original code can be found here: https://github.com/Yueeeeeeee/BERT4NILM
The hyper-parameter dictionary is expected to include the following parameters:
- Parameters
threshold (List of floats) -- The threshold for states generation in the target power consumption, defaults to None
cutoff (List of floats) -- The cutoff for states generation in the target power consumption, defaults to None
min_on (List of floats) -- The min on duration for states generation in the target power consumption, defaults to None
min_off (List of floats) -- The min off duration for states generation in the target power consumption, defaults to None
in_size (int) -- The length of the input sequence, defaults to 488.
stride (int) -- The distance between two consecutive sequences, defaults to 1.
hidden (int) -- The hidden size, defaults to 256
heads (int) -- The number of attention heads in each transformer block, defaults to 2
n_layers (int) -- the number of transformer blocks in the model, defaults to 2
dropout (float) -- The dropout, defaults to 0.2
It can be used as follows:

'Bert4NILM': NILMExperiment({
    "model_name": 'BERT4NILM',
    'in_size': 480,
    'feature_type': 'main',
    'stride': 10,
    'max_nb_epochs': 1,
    'cutoff': {
        'aggregate': 6000,
        'kettle': 3100,
        'fridge': 300,
        'washing machine': 2500,
        'microwave': 3000,
        'dishwasher': 2500
    },
    'threshold': {
        'kettle': 2000,
        'fridge': 50,
        'washing machine': 20,
        'microwave': 200,
        'dishwasher': 10
    },
    'min_on': {
        'kettle': 2,
        'fridge': 10,
        'washing machine': 300,
        'microwave': 2,
        'dishwasher': 300
    },
    'min_off': {
        'kettle': 0,
        'fridge': 2,
        'washing machine': 26,
        'microwave': 5,
        'dishwasher': 300
    },
})
- aggregate_seqs(prediction)[source]¶
Aggregates the overlapping sequences using the mean, taking the stride size into consideration.
- Parameters
prediction (tensor) -- test predictions of the current model
- Returns
Aggregated sequence
- Return type
tensor
- compute_status(data)[source]¶
Calculates the states for the target data based on the threshold.
- Parameters
data (tensor) -- The target data
- Returns
The operational states
- Return type
tensor
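A minimal sketch of threshold-based status generation for one appliance, assuming a 1-D NumPy array of power readings; the min_on filtering shown here is a simplification of the full min_on/min_off handling:

import numpy as np

def compute_status_sketch(power, threshold, min_on):
    status = (power >= threshold).astype(int)
    # locate the boundaries of each on-segment
    diff = np.diff(status, prepend=0, append=0)
    starts, ends = np.where(diff == 1)[0], np.where(diff == -1)[0]
    # drop on-segments that are shorter than min_on samples
    for s, e in zip(starts, ends):
        if e - s < min_on:
            status[s:e] = 0
    return status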
- cutoff_energy(data)[source]¶
Removes the spikes from the data
- Parameters
data (tensor) -- Power consumption
- Returns
Updated power consumption
- Return type
tensor
- forward(sequence)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- predict(model, test_dataloader)[source]¶
Generates predictions for the test data loader
- Parameters
model (nn.Module) -- Pre-trained model
test_dataloader (dataLoader) -- The test data
- Returns
Generated predictions
- Return type
dict
- set_hpramas(cutoff, threshold, min_on, min_off)[source]¶
Setter for the hyper-parameters related to appliance state generation
- Parameters
cutoff (float) -- The power cutoff
threshold (float) -- Threshold of target power consumption
min_on (float) -- Minimum on duration
min_off (float) -- Minimum off duration
- step(batch, seq_type=None)[source]¶
Disaggregates a batch of data
- Parameters
batch (Tensor) -- A batch of data.
- Returns
The loss and the MAE, as returned from the model.
- Return type
tuple(float,float)
- training: bool¶
- class deep_nilmtk.model.bert4nilm.LayerNorm(features, eps=1e-06)[source]¶
Normalization layer
- Parameters
features (int) -- The number of input features
eps (float, optional) -- Regularization factor, defaults to 1e-6
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.bert4nilm.MultiHeadedAttention(h, d_model, dropout=0.1)[source]¶
Multi-headed attention layer
- Parameters
h (int) -- The number of heads
d_model (int) -- The dimension of the model
dropout (float, optional) -- Dropout, defaults to 0.1
- forward(query, key, value, mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.bert4nilm.PositionalEmbedding(max_len, d_model)[source]¶
Positional Embedding
- Parameters
max_len (int) -- maximum length of the input
d_model (int) -- dimension of the model
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.bert4nilm.PositionwiseFeedForward(d_model, d_ff)[source]¶
Position-wise feed-forward layer.
- Parameters
d_model (int) -- The dimension of the model
d_ff (int) -- size of hidden layer
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.bert4nilm.SublayerConnection(size, dropout)[source]¶
Performs the addition and layer normalization. More details can be found at https://arxiv.org/pdf/1706.03762.pdf
- Parameters
size (int) -- the size of the input
dropout (float) -- Dropout
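In other words, the block applies a pre-norm residual update. A minimal sketch, assuming the formulation from the linked paper (the function form is illustrative; the actual class wraps the norm and dropout as submodules):

def sublayer_connection_sketch(x, sublayer, norm, dropout):
    # normalize, run the wrapped sublayer, apply dropout, add the residual
    return x + dropout(sublayer(norm(x)))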
- forward(x, sublayer)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.bert4nilm.TransformerBlock(hidden, attn_heads, feed_forward_hidden, dropout)[source]¶
Transformer decoder block.
- Parameters
hidden (int) -- Dimension of the model
attn_heads (int) -- The number of attention heads
feed_forward_hidden (int) -- The hidden size of feedforward layer
dropout (float) -- Dropout
- forward(x, mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
tempool module¶
- class deep_nilmtk.model.tempool.Decoder(in_features=3, out_features=1, kernel_size=2, stride=2)[source]¶
Decoder block of the Temporal_pooling layer
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.tempool.Encoder(in_features=3, out_features=1, kernel_size=3, padding=1, stride=1, dropout=0.1)[source]¶
Encoder block of the Temporal_pooling layer
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.tempool.PTPNet(params)[source]¶
Source: https://github.com/lmssdd/TPNILM. Check the paper "Non-Intrusive Load Disaggregation by Convolutional Neural Network and Multilabel Classification" by Luca Massidda, Marino Marrocu and Simone Manca.
The hyper-parameter dictionary is expected to include the following parameters:
- Parameters
in_size (int) -- The input sequence length, defaults to 99
border (int) -- The delay between the input and out sequence, defaults to 30.
appliances (list) -- List of appliances
feature_type (str) -- The type of input features generated during pre-processing, defaults to 'main'.
init_features (int) -- The number of features in the first encoder layer, defaults to 32.
dropout (float) -- Dropout
target_norm (str) -- The type of normalization of the target data, defaults to 'z-norm'.
mean (float) -- The mean consumption of the target power, defaults to 0
std (float) -- The STD consumption of the target power, defaults to 1
It can be used as follows; the configuration below is illustrative, following the pattern of the other models on this page:
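'TPNILM': NILMExperiment({
    "model_name": 'PTPNet',   # illustrative registry name
    'in_size': 99,
    'border': 30,
    'feature_type': 'main',
    'target_norm': 'z-norm',
    'max_nb_epochs': 1,
})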
- aggregate_seqs(prediction, states)[source]¶
Aggregates the overlapping sequences using the mean
- Parameters
prediction (tensor) -- test predictions of the current model with shape (n_samples + window_size - 1, window_size)
- Returns
Aggregated sequence
- Return type
tensor
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- predict(model, test_dataloader)[source]¶
Generates predictions for the test data loader
- Parameters
model (nn.Module) -- Pre-trained model
test_dataloader (dataLoader) -- The test data
- Returns
Generated predictions
- Return type
dict
- step(batch, sequence_type=None)[source]¶
Disaggregates a batch of data
- Parameters
batch (Tensor) -- A batch of data.
- Returns
The loss and the MAE, as returned from the model.
- Return type
tuple(float,float)
- training: bool¶
- class deep_nilmtk.model.tempool.TemporalPooling(in_features=3, out_features=1, kernel_size=2, dropout=0.1)[source]¶
Temporal Pooling mechanism that combines data with different scales.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
unet module¶
- class deep_nilmtk.model.unet.UNETNILM(params)[source]¶
UNET-NILM implementation. The original paper can be found here: https://dl.acm.org/doi/abs/10.1145/3427771.3427859
The hyper-parameter dictionary is expected to include the following parameters:
- Parameters
appliances (list) -- List of appliances, defaults to 1
feature_type (str) -- The type of input features generated in the pre-processing, defaults to 'main'
n_channels (int) -- the number of output channels, defaults to 1
pool_filter (int) -- Pooling filter, defaults to 8
latent_size (int) -- The latent size, defaults to 1024
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- predict(model, test_dataloader)[source]¶
Generates predictions during testing for the test_dataloader.
- Parameters
model -- pre-trained model.
test_dataloader (dataLoader) -- data loader for the testing period.
- Returns
Disaggregated power consumption.
- Return type
tensor
- step(batch)[source]¶
Disaggregates a batch of data
- Parameters
batch (Tensor) -- A batch of data.
- Returns
The loss and the MAE, as returned from the model.
- Return type
tuple(float,float)
- training: bool¶
- class deep_nilmtk.model.unet.UNETNILMSeq2Quantile(params)[source]¶
UNET-NILM implementation with quantile regression. The original paper can be found here: https://dl.acm.org/doi/abs/10.1145/3427771.3427859
The hyper-parameter dictionary is expected to include the following parameters:
- Parameters
appliances (list) -- List of appliances, defaults to 1
feature_type (str) -- The type of input features generated in the pre-processing, defaults to 'main'
n_channels (int) -- the number of output channels, defaults to 1
pool_filter (int) -- Pooling filter, defaults to 8
latent_size (int) -- The latent size, defaults to 1024
quantile (list) -- The quantiles to use during prediction, defaults to [0.1, 0.25, 0.5, 0.75, 0.9]
It can be used as follows:
'UNETNiLMSeq2Q': NILMExperiment({
    "model_name": 'UNETNiLMSeq2Quantile',
    'in_size': 480,
    'feature_type': 'mains',
    'input_norm': 'z-norm',
    'target_norm': 'z-norm',
    'kfolds': 3,
    'seq_type': 'seq2quantile',
    'max_nb_epochs': 1
})
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- predict(model, test_dataloader)[source]¶
Generates predictions during testing for the test_dataloader.
- Parameters
model -- pre-trained model.
test_dataloader (dataLoader) -- data loader for the testing period.
- Returns
Disaggregated power consumption.
- Return type
tensor
- smooth_pinball_loss(y, q, tau, alpha=0.01, kappa=1000.0, margin=0.01)[source]¶
The implementation of the pinball loss for NILM. The original code can be found at: https://github.com/hatalis/smooth-pinball-neural-network/blob/master/pinball_loss.py (Hatalis, Kostas, et al., "A Novel Smoothed Loss and Penalty Function for Noncrossing Composite Quantile Estimation via Deep Neural Networks," arXiv preprint, 2019).
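For reference, a minimal sketch of the smoothed pinball idea, assuming y holds the targets, q the predicted quantiles, and tau the quantile levels; the non-crossing penalty from the paper (controlled by kappa and margin) is omitted here:

import torch.nn.functional as F

def smooth_pinball_sketch(y, q, tau, alpha=0.01):
    u = y - q  # per-quantile error
    # smooth surrogate of the pinball loss: tau*u + alpha*log(1 + exp(-u/alpha))
    return (tau * u + alpha * F.softplus(-u / alpha)).mean()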
- step(batch)[source]¶
Disaggregates a batch of data
- Parameters
batch (Tensor) -- A batch of data.
- Returns
The loss and the MAE, as returned from the model.
- Return type
tuple(float,float)
- training: bool¶
wavenet module¶
- class deep_nilmtk.model.wavenet.CNN3(seq_len=41, to_binary=False)[source]¶
- forward(input)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.wavenet.CNN5(seq_len=15, to_binary=False)[source]¶
- forward(input)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.wavenet.CNN7(seq_len=253, to_binary=False)[source]¶
- forward(input)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.wavenet.DilatedResidualBlock(residual_channels, dilation_channels, skip_channels, kernel_size, dilation, bias)[source]¶
- forward(data_in)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.wavenet.DilatedResidualBlock2(residual_channels, dilation_channels, skip_channels, kernel_size, dilation, bias)[source]¶
- forward(data_in)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.wavenet.WaveNet(params)[source]¶
WaveNet model for load disaggregation using residual and dilated convolutions. This is a sequence-to-subsequence model where:
Output_sequence_length = Input_sequence_length - L, with L = (2 ** layers - 1) * (kernel_size - 1) + 1
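For example, with layers=6 and kernel_size=3 (the defaults of the GRU variants below), L = (2 ** 6 - 1) * (3 - 1) + 1 = 127, so each output sequence is 127 samples shorter than its input.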
- forward(data_in)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.wavenet.WaveNetBGRU(layers=6, kernel_size=3, residual_channels=32, dilation_channels=32, skip_channels=32)[source]¶
WaveNet model with a GRU
- forward(data_in)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class deep_nilmtk.model.wavenet.WaveNetBGRU_speedup(layers=6, kernel_size=3, residual_channels=32, dilation_channels=32, skip_channels=32)[source]¶
WaveNet with a GRU and faster generation of predictions
- forward(data_in)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
model_pil module¶
- class deep_nilmtk.model.model_pil.pilModel(net, hparams)[source]¶
Lightning module that is compatible with PyTorch models included in Deep-NILMtk.
- configure_optimizers()[source]¶
Choose what optimizers and learning-rate schedulers to use in your optimization.
- Returns:
Two lists: a list of optimizers and a list of learning-rate schedulers.
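A minimal sketch of the expected return shape, assuming Adam and a step scheduler purely as placeholders (the actual choices are driven by the hyper-parameters):

import torch

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    return [optimizer], [scheduler]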
- training: bool¶
- training_step(batch, batch_idx)[source]¶
Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.
- Args:
  - batch (Tensor | (Tensor, ...) | [Tensor, ...]): The output of your DataLoader. A tensor, tuple or list.
  - batch_idx (int): Integer displaying index of this batch.
  - optimizer_idx (int): When using multiple optimizers, this argument will also be present.
  - hiddens (Tensor): Passed in if truncated_bptt_steps > 0.
- Return:
  Any of.
  - Tensor - The loss tensor
  - dict - A dictionary. Can include any keys, but must include the key 'loss'
  - None - Training will skip to the next batch
- Note:
  Returning None is currently not supported for multi-GPU or TPU, or with 16-bit precision enabled.
In this step you'd normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:
def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss
If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...
If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.
# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    ...
    out, hiddens = self.lstm(data, hiddens)
    ...
    return {"loss": loss, "hiddens": hiddens}
- Note:
The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.
- validation_step(batch, batch_idx)[source]¶
Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.
# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
- Args:
  - batch (Tensor | (Tensor, ...) | [Tensor, ...]): The output of your DataLoader. A tensor, tuple or list.
  - batch_idx (int): The index of this batch.
  - dataloader_idx (int): The index of the dataloader that produced this batch (only if multiple val dataloaders used).
- Return:
  - Any object or value
  - None - Validation will skip to the next batch
# pseudocode of order
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    if defined("validation_step_end"):
        out = validation_step_end(out)
    val_outs.append(out)
val_outs = validation_epoch_end(val_outs)
# if you have one val dataloader:
def validation_step(self, batch, batch_idx):
    ...

# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx):
    ...
Examples:
# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})
If you pass in multiple val dataloaders, validation_step() will have an additional argument.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...
- Note:
If you don't need to validate you don't need to implement this method.
- Note:
When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.