mwptoolkit.model.Seq2Seq.groupatt

class mwptoolkit.model.Seq2Seq.groupatt.GroupATT(config, dataset)[source]

Bases: Module

Reference:

Li et al. “Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions” in ACL 2019.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

calculate_loss(batch_data: dict) → float[source]

Runs the forward pass, computes the loss, and back-propagates.

Parameters

batch_data – one batch of data. batch_data should include the keywords ‘question’, ‘ques len’ and ‘equation’.

Returns

loss value.
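Example (an illustrative sketch, not part of the library; it assumes an already-constructed GroupATT instance, an optimizer, and a batch dict with the documented keys, and relies on the documented behavior that calculate_loss() back-propagates internally):

def train_step(model, optimizer, batch_data):
    # `model` is assumed to be a constructed GroupATT(config, dataset);
    # `batch_data` is a dict with 'question', 'ques len' and 'equation'.
    model.train()
    optimizer.zero_grad()
    # Per the docs, calculate_loss runs the forward pass, computes the
    # loss, and back-propagates, so only the optimizer step remains.
    loss = model.calculate_loss(batch_data)
    optimizer.step()
    return loss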

convert_idx2symbol(output, num_list)[source]
convert_in_idx_2_out_idx(output)[source]
convert_out_idx_2_in_idx(output)[source]
decode(output)[source]
decoder_forward(encoder_outputs, encoder_hidden, decoder_inputs, target=None, output_all_layers=False)[source]
encoder_forward(seq_emb, seq, seq_length, output_all_layers=False)[source]
forward(seq, seq_length, target=None, output_all_layers=False) → Tuple[Tensor, Tensor, Dict[str, Any]][source]

Parameters
  • seq (torch.Tensor) – input sequence, shape: [batch_size, seq_length].

  • seq_length (torch.Tensor) – the length of sequence, shape: [batch_size].

  • target (torch.Tensor | None) – target, shape: [batch_size, target_length], default None.

  • output_all_layers (bool) – return output of all layers if output_all_layers is True, default False.

Returns

token_logits: [batch_size, output_length, output_size], symbol_outputs: [batch_size, output_length], model_all_outputs.

Return type

tuple(torch.Tensor, torch.Tensor, dict)
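Example (a shape-level sketch of the forward() contract; the vocabulary size, batch size and sequence length below are illustrative assumptions, not library constants):

import torch

def run_forward(model, vocab_size=100, batch_size=4, max_len=12):
    # Illustrative dummy inputs; real ids come from the dataset's vocabulary.
    seq = torch.randint(0, vocab_size, (batch_size, max_len))
    seq_length = torch.full((batch_size,), max_len, dtype=torch.long)
    token_logits, symbol_outputs, all_outputs = model(
        seq, seq_length, target=None, output_all_layers=True)
    # token_logits: [batch_size, output_length, output_size]
    # symbol_outputs: [batch_size, output_length]
    # all_outputs: dict of intermediate layer outputs
    return token_logits, symbol_outputs, all_outputs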

init_decoder_inputs(target, device, batch_size)[source]
model_test(batch_data: dict) → tuple[source]

Tests the model on one batch of data.

Parameters

batch_data – one batch of data. batch_data should include the keywords ‘question’, ‘ques len’, ‘equation’ and ‘num list’.

Returns

predicted equation, target equation.
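Example (a sketch of an exact-match evaluation loop built on model_test(); it assumes the method returns parallel per-batch lists of decoded token sequences, and exact-match comparison is a stand-in for mwptoolkit's own answer-value evaluation):

import torch

def evaluate(model, dataloader):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for batch_data in dataloader:
            predicted, target = model.model_test(batch_data)
            # Assumes parallel lists of equation token sequences per batch.
            for pred, tgt in zip(predicted, target):
                correct += int(pred == tgt)
                total += 1
    return correct / max(total, 1)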

predict(batch_data: dict, output_all_layers=False)[source]

Predicts samples without targets.

Parameters
  • batch_data (dict) – one batch of data.

  • output_all_layers (bool) – return all layer outputs of the model.

Returns

token_logits, symbol_outputs, all_layer_outputs
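Example (a minimal inference sketch around predict(); assuming that a target-free batch only needs the input-side keys, which is inferred from the signature rather than stated in the docs):

import torch

def predict_batch(model, batch_data):
    model.eval()
    with torch.no_grad():
        token_logits, symbol_outputs, all_layer_outputs = model.predict(
            batch_data, output_all_layers=True)
    # symbol_outputs holds the decoded symbol ids,
    # shape [batch_size, output_length].
    return symbol_outputs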

process_gap_encoder_decoder(encoder_hidden)[source]
training: bool