mwptoolkit.module.Layer.tree_layers
- class mwptoolkit.module.Layer.tree_layers.DQN(input_size, embedding_size, hidden_size, output_size, dropout_ratio)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(inputs)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
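Example: a minimal usage sketch, not taken from the toolkit's docs. It assumes forward consumes a float feature tensor of shape [batch_size, input_size] and returns Q-values of shape [batch_size, output_size]; the forward() contract is undocumented above, so verify these shapes against the source::

    import torch
    from mwptoolkit.module.Layer.tree_layers import DQN

    # Hypothetical sizes; the input/output shapes are assumptions, since
    # the forward() contract is not documented above.
    dqn = DQN(input_size=128, embedding_size=64, hidden_size=256,
              output_size=4, dropout_ratio=0.5)

    state_features = torch.randn(8, 128)   # assumed: [batch_size, input_size]
    q_values = dqn(state_features)         # assumed: [batch_size, output_size]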
- class mwptoolkit.module.Layer.tree_layers.Dec_LSTM(embedding_size, hidden_size, dropout_ratio)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, prev_c, prev_h, parent_h, sibling_state)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
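Example: a hedged usage sketch. In the Graph2Tree-style tree decoders this cell follows, x is the current input embedding and the four state tensors are hidden_size wide, with the cell returning updated cell and hidden states; the shapes and the return pair are assumptions drawn from that lineage, not from the docstring above::

    import torch
    from mwptoolkit.module.Layer.tree_layers import Dec_LSTM

    batch_size, embedding_size, hidden_size = 4, 300, 512
    cell = Dec_LSTM(embedding_size, hidden_size, dropout_ratio=0.5)

    x = torch.randn(batch_size, embedding_size)           # current node embedding
    prev_c = torch.randn(batch_size, hidden_size)         # previous cell state
    prev_h = torch.randn(batch_size, hidden_size)         # previous hidden state
    parent_h = torch.randn(batch_size, hidden_size)       # parent node's hidden state
    sibling_state = torch.randn(batch_size, hidden_size)  # left-sibling summary

    c, h = cell(x, prev_c, prev_h, parent_h, sibling_state)  # assumed: (cell, hidden)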
- class mwptoolkit.module.Layer.tree_layers.DecomposeModel(hidden_size, dropout, device)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(node_stacks, tree_stacks, nodes_context, labels_embedding, pad_node=True)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class mwptoolkit.module.Layer.tree_layers.GateNN(hidden_size, input1_size, input2_size=0, dropout=0.4, single_layer=False)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(hidden, input1, input2=None)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
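Example: a hedged usage sketch. GateNN applies gated feed-forward layers over the concatenation of hidden and its inputs; the shapes below ([batch_size, hidden_size] and [batch_size, input1_size]) and the hidden_size-wide output are assumptions based on the constructor signature::

    import torch
    from mwptoolkit.module.Layer.tree_layers import GateNN

    gate = GateNN(hidden_size=512, input1_size=128)  # input2_size defaults to 0

    hidden = torch.randn(4, 512)   # assumed: [batch_size, hidden_size]
    input1 = torch.randn(4, 128)   # assumed: [batch_size, input1_size]

    out = gate(hidden, input1)     # assumed: [batch_size, hidden_size]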
- class mwptoolkit.module.Layer.tree_layers.GenerateNode(hidden_size, op_nums, embedding_size, dropout=0.5)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(node_embedding, node_label, current_context)[source]
- Parameters
node_embedding (torch.Tensor) – node embedding, shape [batch_size, hidden_size].
node_label (torch.Tensor) – representation of node label, shape [batch_size, embedding_size].
current_context (torch.Tensor) – current context, shape [batch_size, hidden_size].
- Returns
l_child, representation of left child, shape [batch_size, hidden_size].
r_child, representation of right child, shape [batch_size, hidden_size].
node_label_, representation of node label, shape [batch_size, embedding_size].
- Return type
tuple(torch.Tensor, torch.Tensor, torch.Tensor)
- training: bool
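Example: a usage sketch following the GTS lineage this layer comes from, with one caveat flagged: since the layer is constructed with op_nums and returns an embedded label, the reference implementation consumes node_label as a LongTensor of operator indices ([batch_size]) and embeds it internally, and the shape note above appears to describe the embedded result. The sketch assumes the index convention; verify against the source::

    import torch
    from mwptoolkit.module.Layer.tree_layers import GenerateNode

    batch_size, hidden_size, embedding_size, op_nums = 4, 512, 128, 5
    generate = GenerateNode(hidden_size, op_nums, embedding_size)

    node_embedding = torch.randn(batch_size, hidden_size)   # goal vector of current node
    current_context = torch.randn(batch_size, hidden_size)  # attention context
    # Assumption: operator indices, embedded internally by the layer.
    node_label = torch.randint(0, op_nums, (batch_size,))

    l_child, r_child, node_label_ = generate(node_embedding, node_label, current_context)
    # l_child, r_child: [batch_size, hidden_size]; node_label_: [batch_size, embedding_size]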
- class mwptoolkit.module.Layer.tree_layers.Merge(hidden_size, embedding_size, dropout=0.5)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(node_embedding, sub_tree_1, sub_tree_2)[source]
- Parameters
node_embedding (torch.Tensor) – node embedding, shape [1, embedding_size].
sub_tree_1 (torch.Tensor) – representation of subtree 1, shape [1, hidden_size].
sub_tree_2 (torch.Tensor) – representation of subtree 2, shape [1, hidden_size].
- Returns
representation of the merged tree, shape [1, hidden_size].
- Return type
torch.Tensor
- training: bool
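Example: a usage sketch following the documented shapes. Merge combines an operator's embedding with the representations of its two completed subtrees into a single subtree representation; the batch dimension is 1 because GTS-style decoding merges one node at a time::

    import torch
    from mwptoolkit.module.Layer.tree_layers import Merge

    hidden_size, embedding_size = 512, 128
    merge = Merge(hidden_size, embedding_size)

    op_embedding = torch.randn(1, embedding_size)  # embedding of the operator node
    left_subtree = torch.randn(1, hidden_size)     # representation of the left subtree
    right_subtree = torch.randn(1, hidden_size)    # representation of the right subtree

    subtree = merge(op_embedding, left_subtree, right_subtree)  # [1, hidden_size]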
- class mwptoolkit.module.Layer.tree_layers.NodeEmbeddingLayer(op_nums, embedding_size)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(node_embedding, node_label, current_context)[source]
- Parameters
node_embedding (torch.Tensor) – node embedding, shape [batch_size, num_directions * hidden_size].
node_label (torch.Tensor) – node label index, shape [batch_size].
- Returns
l_child, representation of left child, shape [batch_size, num_directions * hidden_size].
r_child, representation of right child, shape [batch_size, num_directions * hidden_size].
node_label_, representation of node label, shape [batch_size, embedding_size].
- Return type
tuple(torch.Tensor, torch.Tensor, torch.Tensor)
- training: bool
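Example: a hedged usage sketch. The constructor only knows op_nums and embedding_size, while the documented child shapes preserve the node-embedding width, so the sketch sets embedding_size equal to num_directions * hidden_size; treat that equality, and the role of current_context (undocumented above), as assumptions to check against the source::

    import torch
    from mwptoolkit.module.Layer.tree_layers import NodeEmbeddingLayer

    batch_size, op_nums, embedding_size = 4, 5, 512
    # Assumption: embedding_size == num_directions * hidden_size of the encoder.
    layer = NodeEmbeddingLayer(op_nums, embedding_size)

    node_embedding = torch.randn(batch_size, embedding_size)
    node_label = torch.randint(0, op_nums, (batch_size,))   # label indices, [batch_size]
    current_context = torch.randn(batch_size, embedding_size)

    l_child, r_child, node_label_ = layer(node_embedding, node_label, current_context)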
- class mwptoolkit.module.Layer.tree_layers.NodeEmbeddingNode(node_hidden, node_context=None, label_embedding=None)[source]
Bases: object
- class mwptoolkit.module.Layer.tree_layers.NodeGenerater(hidden_size, op_nums, embedding_size, dropout=0.5)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(node_embedding, node_label, current_context)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class mwptoolkit.module.Layer.tree_layers.Prediction(hidden_size, op_nums, input_size, dropout=0.5)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(node_stacks, left_childs, encoder_outputs, num_pades, padding_hidden, seq_mask, mask_nums)[source]
- Parameters
node_stacks (list) – node stacks.
left_childs (list) – representations of left children.
encoder_outputs (torch.Tensor) – output from encoder, shape [sequence_length, batch_size, hidden_size].
num_pades (torch.Tensor) – number representations, shape [batch_size, number_size, hidden_size].
padding_hidden (torch.Tensor) – padding hidden representation, shape [1, hidden_size].
seq_mask (torch.BoolTensor) – sequence mask, shape [batch_size, sequence_length].
mask_nums (torch.BoolTensor) – number mask, shape [batch_size, number_size].
- Returns
num_score, number score, shape [batch_size, number_size].
op, operator score, shape [batch_size, operator_size].
current_node, current node representation, shape [batch_size, 1, hidden_size].
current_context, current context representation, shape [batch_size, 1, hidden_size].
embedding_weight, embedding weight, shape [batch_size, number_size, hidden_size].
- Return type
tuple(torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor)
- training: bool
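Example: a sketch of one decoding step, with assumptions flagged. _Node is a hypothetical stand-in exposing only the .embedding attribute that Prediction reads from each node stack (the toolkit has its own node class), input_size is taken to be the number of extra constant symbols, and the number-mask width is assumed to be input_size plus the count of problem numbers, matching how GTS-style decoders concatenate the number scores::

    import torch
    from mwptoolkit.module.Layer.tree_layers import Prediction

    batch_size, seq_len, hidden_size = 2, 10, 512
    op_nums, num_constants, num_in_problem = 5, 2, 4
    predict = Prediction(hidden_size, op_nums, input_size=num_constants)

    class _Node:  # hypothetical stand-in; only .embedding is read here
        def __init__(self, embedding):
            self.embedding = embedding

    root_goal = torch.randn(batch_size, 1, hidden_size)
    node_stacks = [[_Node(root_goal[b])] for b in range(batch_size)]
    left_childs = [None] * batch_size  # no subtree finished yet

    encoder_outputs = torch.randn(seq_len, batch_size, hidden_size)
    num_pades = torch.randn(batch_size, num_in_problem, hidden_size)
    padding_hidden = torch.zeros(1, hidden_size)
    seq_mask = torch.zeros(batch_size, seq_len, dtype=torch.bool)
    # Assumed width: constants + problem numbers.
    mask_nums = torch.zeros(batch_size, num_constants + num_in_problem, dtype=torch.bool)

    num_score, op, current_node, current_context, embedding_weight = predict(
        node_stacks, left_childs, encoder_outputs, num_pades,
        padding_hidden, seq_mask, mask_nums)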
- class mwptoolkit.module.Layer.tree_layers.RecursiveNN(emb_size, op_size, op_list)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(expression_tree, num_embedding, look_up, out_idx2symbol)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class mwptoolkit.module.Layer.tree_layers.Score(input_size, hidden_size)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(hidden, num_embeddings, num_mask=None)[source]
- Parameters
hidden (torch.Tensor) – hidden representation, shape [batch_size, 1, hidden_size + input_size].
num_embeddings (torch.Tensor) – number embeddings, shape [batch_size, number_size, hidden_size].
num_mask (torch.BoolTensor) – number mask, shape [batch_size, number_size].
- Returns
score, shape [batch_size, number_size].
- Return type
score (torch.Tensor)
- training: bool
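Example: a hedged usage sketch. In GTS-style decoders, Score is built with input_size = 2 * hidden_size and hidden is the concatenation of the current node state and its context vector, i.e. [batch_size, 1, 2 * hidden_size]; read the documented hidden_size + input_size width with that usage in mind, and treat these widths as assumptions to verify against the source::

    import torch
    from mwptoolkit.module.Layer.tree_layers import Score

    batch_size, number_size, hidden_size = 4, 6, 512
    score_layer = Score(input_size=hidden_size * 2, hidden_size=hidden_size)

    node_state = torch.randn(batch_size, 1, hidden_size)
    context = torch.randn(batch_size, 1, hidden_size)
    hidden = torch.cat((node_state, context), dim=2)  # [batch_size, 1, 2 * hidden_size]

    num_embeddings = torch.randn(batch_size, number_size, hidden_size)
    num_mask = torch.zeros(batch_size, number_size, dtype=torch.bool)  # True = masked

    score = score_layer(hidden, num_embeddings, num_mask)  # [batch_size, number_size]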
- class mwptoolkit.module.Layer.tree_layers.ScoreModel(hidden_size)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(hidden, context, token_embeddings)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class mwptoolkit.module.Layer.tree_layers.SemanticAlignmentModule(encoder_hidden_size, decoder_hidden_size, hidden_size, batch_first=False)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(decoder_hidden, encoder_outputs)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class mwptoolkit.module.Layer.tree_layers.SubTreeMerger(hidden_size, embedding_size, dropout=0.5)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(node_embedding, sub_tree_1, sub_tree_2)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class mwptoolkit.module.Layer.tree_layers.TreeAttention(input_size, hidden_size)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(hidden, encoder_outputs, seq_mask=None)[source]
- Parameters
hidden (torch.Tensor) – hidden representation, shape [1, batch_size, hidden_size].
encoder_outputs (torch.Tensor) – output from encoder, shape [sequence_length, batch_size, hidden_size].
seq_mask (torch.Tensor) – sequence mask, shape [batch_size, sequence_length].
- Returns
attention energies, shape [batch_size, 1, sequence_length].
- Return type
attn_energies (torch.Tensor)
- training: bool
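Example: a usage sketch following the documented shapes, with input_size set equal to hidden_size as in GTS-style decoders (an assumption). The returned energies attend over the sequence and can be combined with the encoder outputs via bmm to form a context vector::

    import torch
    from mwptoolkit.module.Layer.tree_layers import TreeAttention

    batch_size, seq_len, hidden_size = 4, 10, 512
    attn = TreeAttention(input_size=hidden_size, hidden_size=hidden_size)

    hidden = torch.randn(1, batch_size, hidden_size)
    encoder_outputs = torch.randn(seq_len, batch_size, hidden_size)
    seq_mask = torch.zeros(batch_size, seq_len, dtype=torch.bool)  # True = padding

    attn_energies = attn(hidden, encoder_outputs, seq_mask)       # [batch_size, 1, seq_len]
    context = attn_energies.bmm(encoder_outputs.transpose(0, 1))  # [batch_size, 1, hidden_size]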
- class mwptoolkit.module.Layer.tree_layers.TreeEmbedding(embedding, terminal=False)[source]
Bases: object
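TreeEmbedding and NodeEmbeddingNode are plain containers used on the decoding stacks: TreeEmbedding pairs a subtree representation with a flag marking whether the subtree is a terminal (a number) or an operator node, and NodeEmbeddingNode bundles a node's hidden state with its context and label embedding. A small illustrative sketch, with field meanings inferred from the constructor signatures::

    import torch
    from mwptoolkit.module.Layer.tree_layers import NodeEmbeddingNode, TreeEmbedding

    # A finished leaf (e.g. a number) versus a pending operator node.
    leaf = TreeEmbedding(torch.randn(1, 512), terminal=True)
    op_node = TreeEmbedding(torch.randn(1, 512), terminal=False)

    node = NodeEmbeddingNode(node_hidden=torch.randn(1, 512),
                             node_context=torch.randn(1, 512),
                             label_embedding=torch.randn(1, 128))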
- class mwptoolkit.module.Layer.tree_layers.TreeEmbeddingModel(hidden_size, op_set, dropout=0.4)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(class_embedding, tree_stacks, embed_node_index)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool