mwptoolkit.module.Layer.layers¶
- class mwptoolkit.module.Layer.layers.GenVar(dim_encoder_state, dim_context, dim_attn_hidden=256, dropout_rate=0.5)[source]¶
Bases:
Module
Module to generate variable embedding.
- Parameters
dim_encoder_state (int) – Dimension of the last cell state of the encoder RNN (output of the Encoder module).
dim_context (int) – Dimension of the RNN in the GenVar module.
dim_attn_hidden (int) – Dimension of the hidden layer in attention.
dim_mlp_hiddens (int) – Dimension of the hidden layers in the MLP that transforms the encoder state into the attention query.
dropout_rate (float) – Dropout rate for attention and MLP.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(encoder_state, context, context_lens)[source]¶
Generate embedding for an unknown variable.
- Parameters
encoder_state (torch.FloatTensor) – Last cell state of the encoder (output of the Encoder module).
context (torch.FloatTensor) – Encoded context, with size [batch_size, text_len, dim_hidden].
context_lens (torch.LongTensor) – Lengths of the context sequences.
- Returns
Embedding of an unknown variable, with size [batch_size, dim_context].
- Return type
torch.FloatTensor
- training: bool¶
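A minimal usage sketch of GenVar. The constructor and forward arguments follow the signatures above; the tensor shapes are assumptions inferred from the parameter descriptions (encoder_state is taken as a 2-D tensor of shape [batch_size, dim_encoder_state], the context's dim_hidden is taken equal to dim_context, and context_lens is a 1-D tensor of lengths) and should be checked against the source.

```python
import torch
from mwptoolkit.module.Layer.layers import GenVar

batch_size, text_len = 4, 20
dim_encoder_state, dim_context = 512, 512

gen_var = GenVar(dim_encoder_state, dim_context)

# Assumed shapes (see the note above); adjust to the shapes actually
# produced by the Encoder module in your pipeline.
encoder_state = torch.randn(batch_size, dim_encoder_state)            # last cell state
context = torch.randn(batch_size, text_len, dim_context)              # encoded context
context_lens = torch.full((batch_size,), text_len, dtype=torch.long)  # context lengths

var_embedding = gen_var(encoder_state, context, context_lens)
# Per the docstring, the result has size [batch_size, dim_context].
```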
- class mwptoolkit.module.Layer.layers.Transformer(dim_hidden)[source]¶
Bases:
Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(top2)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class mwptoolkit.module.Layer.layers.TreeAttnDecoderRNN(hidden_size, embedding_size, input_size, output_size, n_layers=2, dropout=0.5)[source]¶
Bases:
Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(input_seq, last_hidden, encoder_outputs, seq_mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
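A minimal, hypothetical sketch of running TreeAttnDecoderRNN for a single decoding step. The argument names come from the signature above; the tensor shapes (token indices for input_seq, [n_layers, batch_size, hidden_size] for last_hidden, [seq_len, batch_size, hidden_size] for encoder_outputs, and a boolean padding mask for seq_mask) are assumptions based on common attention-decoder conventions, not guarantees from the library, and should be verified against the source.

```python
import torch
from mwptoolkit.module.Layer.layers import TreeAttnDecoderRNN

batch_size, seq_len = 4, 20
hidden_size, embedding_size = 512, 128
input_size, output_size = 100, 100   # hypothetical vocabulary sizes
n_layers = 2

decoder = TreeAttnDecoderRNN(hidden_size, embedding_size, input_size, output_size,
                             n_layers=n_layers, dropout=0.5)

# Assumed shapes (see the note above).
input_seq = torch.zeros(batch_size, dtype=torch.long)            # previous output tokens
last_hidden = torch.zeros(n_layers, batch_size, hidden_size)     # previous decoder hidden state
encoder_outputs = torch.randn(seq_len, batch_size, hidden_size)  # encoder outputs
seq_mask = torch.zeros(batch_size, seq_len, dtype=torch.bool)    # True marks padded positions

step_output = decoder(input_seq, last_hidden, encoder_outputs, seq_mask)
```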