mwptoolkit.model.Graph2Tree.multiencdec¶
- class mwptoolkit.model.Graph2Tree.multiencdec.MultiEncDec(config, dataset)[source]¶
Bases: Module
- Reference:
Shen et al. “Solving Math Word Problems with Multi-Encoders and Multi-Decoders” in COLING 2020.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- attn_decoder_forward(encoder_outputs, seq_mask, decoder_hidden, num_stack, target=None, output_all_layers=False)[source]¶
- calculate_loss(batch_data: dict) float [source]¶
Performs forward propagation, computes the loss, and runs back-propagation.
- Parameters
batch_data – one batch data.
- Returns
loss value.
batch_data should include keywords ‘input1’, ‘input2’, ‘output1’, ‘output2’, ‘input1 len’, ‘parse graph’, ‘num stack’, ‘output1 len’, ‘output2 len’, ‘num size’, ‘num pos’, ‘num order’
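Since `calculate_loss` raises a `KeyError` if any of the keys above is absent, it can help to validate a batch before calling it. The helper below is a hypothetical sketch (not part of mwptoolkit); only the key names come from the documentation above.

```python
# Hypothetical helper (not in mwptoolkit): check that a batch dict carries
# every key calculate_loss expects before handing it to the model.
REQUIRED_LOSS_KEYS = {
    "input1", "input2", "output1", "output2", "input1 len", "parse graph",
    "num stack", "output1 len", "output2 len", "num size", "num pos", "num order",
}

def check_loss_batch(batch_data: dict) -> None:
    """Raise KeyError naming any keys calculate_loss would miss."""
    missing = REQUIRED_LOSS_KEYS - batch_data.keys()
    if missing:
        raise KeyError(f"batch_data is missing keys: {sorted(missing)}")
```

Call `check_loss_batch(batch_data)` just before `model.calculate_loss(batch_data)` to fail fast with an explicit list of missing keys.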
- decoder_forward(encoder_outputs, problem_output, attn_decoder_hidden, all_nums_encoder_outputs, seq_mask, num_mask, num_stack, target1, target2, output_all_layers)[source]¶
- encoder_forward(input1, input2, input_length, parse_graph, num_pos, num_pos_pad, num_order_pad, output_all_layers=False)[source]¶
- forward(input1, input2, input_length, num_size, num_pos, num_order, parse_graph, num_stack, target1=None, target2=None, output_all_layers=False)[source]¶
- Parameters
input1 (torch.Tensor) –
input2 (torch.Tensor) –
input_length (torch.Tensor) –
num_size (list) –
num_pos (list) –
num_order (list) –
parse_graph (torch.Tensor) –
num_stack (list) –
target1 (torch.Tensor | None) –
target2 (torch.Tensor | None) –
output_all_layers (bool) –
- Returns
- get_all_number_encoder_outputs(encoder_outputs, num_pos, batch_size, num_size, hidden_size)[source]¶
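`get_all_number_encoder_outputs` selects, for each problem in the batch, the encoder hidden states at the token positions where numbers occur (`num_pos`), padded up to the batch-wide number count. The list-based sketch below illustrates that gather under the assumption stated in its docstring; the real method operates on torch tensors of shape dictated by `batch_size` and `hidden_size`.

```python
def gather_number_outputs(encoder_outputs, num_pos, num_size, pad_vector):
    """List-based sketch of the gather done by get_all_number_encoder_outputs.

    encoder_outputs: per-problem list of hidden vectors, one per token.
    num_pos: per-problem list of token indices where numbers occur.
    num_size: maximum number count in the batch; shorter lists are padded.
    pad_vector: filler vector for problems with fewer than num_size numbers.
    """
    gathered = []
    for outputs, positions in zip(encoder_outputs, num_pos):
        vecs = [outputs[p] for p in positions]           # states at number positions
        vecs += [pad_vector] * (num_size - len(vecs))    # pad to a common width
        gathered.append(vecs)
    return gathered
```

The gathered vectors give the tree decoder one embedding per quantity in the problem text, so generated number tokens can point back to their source positions.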
- model_test(batch_data: dict) Tuple[str, list, list] [source]¶
Tests the model on one batch of data.
- Parameters
batch_data – one batch data.
- Returns
result_type, predicted equation, target equation.
batch_data should include keywords ‘input1’, ‘input2’, ‘output1’, ‘output2’, ‘input1 len’, ‘parse graph’, ‘num stack’, ‘num pos’, ‘num order’, ‘num list’
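Since `model_test` returns `(result_type, predicted equation, target equation)`, a simple evaluation loop can tally how often the prediction matches the target. The sketch below is hypothetical (the stub stands in for a trained `MultiEncDec` and real batches from the dataloader); only the return-tuple shape comes from the documentation above.

```python
# Hypothetical evaluation loop; `model` is assumed to expose model_test
# returning (result_type, predicted equation, target equation).
def equation_accuracy(model, batches):
    """Fraction of batches whose predicted equation equals the target."""
    if not batches:
        return 0.0
    correct = 0
    for batch in batches:
        _result_type, predicted, target = model.model_test(batch)
        correct += int(predicted == target)
    return correct / len(batches)
```

Note this measures exact equation match; an expression-value match (evaluating both equations) is stricter to implement but more forgiving of equivalent forms.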
- training: bool¶