mwptoolkit.model.Seq2Seq.saligned

class mwptoolkit.model.Seq2Seq.saligned.Saligned(config, dataset)[source]

Bases: Module

Reference:

Chiang et al. “Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems”.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

calculate_loss(batch_data: dict) → float[source]

Run the forward pass, compute the loss, and back-propagate.

Parameters

batch_data – one batch of data.

Returns

loss value.

batch_data should include the keys ‘question’, ‘ques len’, ‘equation’, ‘equ len’, ‘num pos’, ‘num list’ and ‘num size’.
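The expected batch dictionary can be sketched as follows. The key names are taken from the docstring above; the values are illustrative placeholders, not real data — in mwptoolkit these fields are produced by the dataloader.

```python
# Minimal sketch of the batch_data dict expected by calculate_loss /
# model_test. All values are made up for illustration.
batch_data = {
    "question": [[2, 15, 7, 9, 3]],   # token ids, one problem per row
    "ques len": [5],                  # true length of each question
    "equation": [[4, 11, 5, 12, 6]],  # target equation token ids
    "equ len": [5],                   # true length of each equation
    "num pos": [[1, 3]],              # positions of numbers in the question
    "num list": [["3", "5"]],         # the numbers themselves
    "num size": [2],                  # how many numbers each problem has
}

required = {"question", "ques len", "equation", "equ len",
            "num pos", "num list", "num size"}
assert required <= batch_data.keys()
```

Note that ‘num pos’, ‘num list’ and ‘num size’ must agree per problem: one position and one list entry per number.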

convert_idx2symbol(output, num_list)[source]
convert_mask_num(batch_output, num_list)[source]
decoder_forward(encoder_outputs, encoder_hidden, inputs_length, operands, stacks, number_emb, target=None, target_length=None, output_all_layers=False)[source]
encoder_forward(seq_emb, seq_length, constant_indices, output_all_layers=False)[source]
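convert_idx2symbol maps predicted index sequences back to symbol strings, substituting number placeholders with the actual numbers from num_list. A hypothetical re-implementation of that idea — the real vocabulary and placeholder naming live in the dataset object, so the “NUM_k” scheme and idx2word mapping here are assumptions for illustration:

```python
def idx2symbol_sketch(output, num_list, idx2word):
    """Map index sequences to symbols, replacing NUM_k placeholders
    with the corresponding entries of num_list. Illustrative only."""
    result = []
    for idx_seq, nums in zip(output, num_list):
        symbols = []
        for idx in idx_seq:
            word = idx2word[idx]
            if word.startswith("NUM_"):        # assumed placeholder scheme
                symbols.append(nums[int(word[4:])])
            else:
                symbols.append(word)
        result.append(symbols)
    return result

vocab = {0: "+", 1: "=", 2: "NUM_0", 3: "NUM_1", 4: "x"}
print(idx2symbol_sketch([[4, 1, 2, 0, 3]], [["3", "5"]], vocab))
# [['x', '=', '3', '+', '5']]
```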
forward(seq, seq_length, number_list, number_position, number_size, target=None, target_length=None, output_all_layers=False) → Tuple[Tuple[Tensor, Tensor], Tensor, Dict[str, Any]][source]
Parameters
  • seq (torch.Tensor) – input question token sequences.

  • seq_length (torch.Tensor) – length of each input sequence.

  • number_list (list) – numbers appearing in each problem.

  • number_position (list) – positions of those numbers in the input.

  • number_size (list) – number count per problem.

  • target (torch.Tensor | None) – target equation token sequences.

  • target_length (torch.Tensor | None) – length of each target sequence.

  • output_all_layers (bool) – whether to return outputs of all layers.

Returns

token_logits: [batch_size, output_length, output_size], symbol_outputs: [batch_size, output_length], model_all_outputs.

Return type

tuple(torch.Tensor, torch.Tensor, dict)
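The two tensor outputs are related by a greedy argmax over the vocabulary dimension: each row of symbol_outputs is the index of the largest logit at that decoding step. In plain Python, with nested lists standing in for tensors:

```python
def greedy_decode(token_logits):
    """Collapse [batch_size, output_length, output_size] logits into
    [batch_size, output_length] symbol indices by taking the argmax
    over the last (vocabulary) dimension. Pure-Python stand-in for
    torch.argmax(token_logits, dim=-1)."""
    return [
        [max(range(len(step)), key=step.__getitem__) for step in seq]
        for seq in token_logits
    ]

logits = [[[0.1, 0.7, 0.2],   # batch of 1, output_length 2, output_size 3
           [0.5, 0.2, 0.3]]]
print(greedy_decode(logits))  # [[1, 0]]
```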

model_test(batch_data: dict) → tuple[source]

Model test.

Parameters

batch_data – one batch of data.

Returns

predicted equation, target equation.

batch_data should include the keys ‘question’, ‘ques len’, ‘equation’, ‘equ len’, ‘num pos’, ‘num list’ and ‘num size’.
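Since model_test returns the predicted and target equations as token sequences, they can be compared token-by-token or by the value they evaluate to. A hedged sketch of the latter — the toolkit ships its own, more general evaluators, and the ‘x = expression’ form assumed here is only for illustration:

```python
def equation_value(tokens):
    """Evaluate a flat infix equation given as a token list, e.g.
    ['x', '=', '3', '+', '5'] -> 8.0. Sketch only: assumes the form
    'x = <arithmetic expression>' with numeric literals."""
    assert tokens[:2] == ["x", "="]
    return float(eval("".join(tokens[2:])))  # fine for trusted toy input

predicted = ["x", "=", "3", "+", "5"]
target = ["x", "=", "5", "+", "3"]
# Token-level match fails, but the two equations have the same value:
print(predicted == target)                                  # False
print(equation_value(predicted) == equation_value(target))  # True
```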

predict(batch_data: dict, output_all_layers=False)[source]

Predict samples without target equations.

Parameters
  • batch_data (dict) – one batch of data.

  • output_all_layers (bool) – whether to return all layer outputs of the model.

Returns

token_logits, symbol_outputs, all_layer_outputs
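output_all_layers follows a common convention: alongside the main outputs, the model returns a dict of intermediate results for inspection, empty when the flag is off. A minimal illustration of the pattern — a toy stand-in, not the actual Saligned internals:

```python
def forward_sketch(x, output_all_layers=False):
    """Toy two-stage 'model' showing the output_all_layers convention:
    the main result is always returned; intermediate values are
    collected into a dict only when requested."""
    hidden = [v * 2 for v in x]   # stand-in for the encoder
    output = sum(hidden)          # stand-in for the decoder
    all_layer_outputs = {}
    if output_all_layers:
        all_layer_outputs["encoder_outputs"] = hidden
        all_layer_outputs["final_output"] = output
    return output, all_layer_outputs

out, layers = forward_sketch([1, 2, 3], output_all_layers=True)
print(out)     # 12
print(layers)  # {'encoder_outputs': [2, 4, 6], 'final_output': 12}
```

This keeps the default call cheap while letting callers opt in to debugging output without changing the return arity.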

training: bool