mwptoolkit.module.Encoder.rnn_encoder

class mwptoolkit.module.Encoder.rnn_encoder.BasicRNNEncoder(embedding_size, hidden_size, num_layers, rnn_cell_type, dropout_ratio, bidirectional=True, batch_first=True)[source]

Bases: Module

Basic Recurrent Neural Network (RNN) encoder.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input_embeddings, input_length, hidden_states=None)[source]

Implement the encoding process.

Parameters
  • input_embeddings (torch.Tensor) – source sequence embedding, shape: [batch_size, sequence_length, embedding_size].

  • input_length (torch.Tensor) – length of input sequence, shape: [batch_size].

  • hidden_states (torch.Tensor) – initial hidden states, default: None.

Returns

output features, shape: [batch_size, sequence_length, num_directions * hidden_size], and hidden states, shape: [batch_size, num_layers * num_directions, hidden_size].

Return type

tuple(torch.Tensor, torch.Tensor)

init_hidden(input_embeddings)[source]

Initialize the hidden states of the RNN.

Parameters

input_embeddings (torch.Tensor) – input sequence embedding, shape: [batch_size, sequence_length, embedding_size].

Returns

the initial hidden states.

Return type

torch.Tensor

training: bool
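
A minimal usage sketch (not taken from the library's documentation): it assumes rnn_cell_type accepts a cell-name string such as 'gru', and the dimensions below are illustrative only. Tensor shapes follow the parameter descriptions above.

import torch
from mwptoolkit.module.Encoder.rnn_encoder import BasicRNNEncoder

# Illustrative sizes (assumptions, not library defaults).
batch_size, seq_len, embedding_size, hidden_size = 4, 10, 128, 256

encoder = BasicRNNEncoder(
    embedding_size=embedding_size,
    hidden_size=hidden_size,
    num_layers=2,
    rnn_cell_type='gru',   # assumption: a cell-name string such as 'rnn'/'lstm'/'gru'
    dropout_ratio=0.5,
    bidirectional=True,
    batch_first=True,
)

# Source sequence embeddings: [batch_size, sequence_length, embedding_size].
input_embeddings = torch.randn(batch_size, seq_len, embedding_size)
# Valid lengths per sequence, sorted in descending order in case the
# encoder packs the padded batch internally.
input_length = torch.tensor([10, 9, 7, 5])

output, hidden = encoder(input_embeddings, input_length)
# output: [batch_size, sequence_length, num_directions * hidden_size]
# hidden: hidden states as described in the Returns section above.
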
class mwptoolkit.module.Encoder.rnn_encoder.GroupAttentionRNNEncoder(emb_size=100, hidden_size=128, n_layers=1, bidirectional=False, rnn_cell=None, rnn_cell_name='gru', variable_lengths=True, d_ff=2048, dropout=0.3, N=1)[source]

Bases: Module

Group Attentional Recurrent Neural Network (RNN) encoder.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(embedded, input_var, split_list, input_lengths=None)[source]

Parameters
  • embedded (torch.Tensor) – embedded inputs, shape [batch_size, sequence_length, embedding_size].

  • input_var (torch.Tensor) – source sequence, shape [batch_size, sequence_length].

  • split_list (list) – group split indices used to partition the source sequence for group attention.

  • input_lengths (torch.Tensor) – length of input sequence, shape: [batch_size].

Returns

output features, shape: [batch_size, sequence_length, num_directions * hidden_size], and hidden states, shape: [batch_size, num_layers * num_directions, hidden_size].

Return type

tuple(torch.Tensor, torch.Tensor)

training: bool
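
A construction-only sketch (not taken from the library's documentation). The exact format of split_list depends on the group-attention implementation, so the forward call is only indicated in a comment rather than executed.

from mwptoolkit.module.Encoder.rnn_encoder import GroupAttentionRNNEncoder

encoder = GroupAttentionRNNEncoder(
    emb_size=100,
    hidden_size=128,
    n_layers=1,
    bidirectional=False,
    rnn_cell=None,          # default; presumably the cell is built from rnn_cell_name
    rnn_cell_name='gru',
    variable_lengths=True,
    d_ff=2048,
    dropout=0.3,
    N=1,
)

# Per the signature above, encoding a batch looks like:
#   output, hidden = encoder(embedded, input_var, split_list, input_lengths)
# where embedded is [batch_size, sequence_length, embedding_size],
# input_var is the token-id matrix [batch_size, sequence_length], and
# split_list holds the group split indices used by the group attention.
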
class mwptoolkit.module.Encoder.rnn_encoder.HWCPEncoder(embedding_model, embedding_size, hidden_size=512, span_size=0, dropout_ratio=0.4)[source]

Bases: Module

Hierarchical word-clause-problem encoder.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

bi_combine(output, hidden)[source]

clause_level_forward(word_output, tree_batch)[source]

dependency_encode(word_output, node)[source]
forward(input_var, input_lengths, span_length, tree=None, output_all_layers=False)[source]

Not implemented

get_mask(encode_lengths, pad_length)[source]

problem_level_forword(span_input, span_mask)[source]

training: bool

word_level_forward(embedding_inputs, input_length, bi_word_hidden=None)[source]
class mwptoolkit.module.Encoder.rnn_encoder.SalignedEncoder(dim_embed, dim_hidden, dim_last, dropout_rate, dim_attn_hidden=256)[source]

Bases: Module

Simple RNN encoder with attention, which also extracts variable embeddings.

Parameters
  • dim_embed (int) – Dimension of input embedding.

  • dim_hidden (int) – Dimension of encoder RNN.

  • dim_last (int) – Dimension that the last state will be transformed to.

  • dropout_rate (float) – Dropout rate.

forward(inputs, lengths, constant_indices)[source]

Parameters
  • inputs (torch.Tensor) – Indices of words, shape [batch_size, sequence_length].

  • lengths (torch.Tensor) – Length of inputs, shape [batch_size].

  • constant_indices (list of list of int) – for each instance in the batch, a list of indices of its constant tokens, used to extract the variable embeddings.

Returns

Encoded sequence, shape [batch_size, sequence_length, hidden_size].

Return type

torch.Tensor

get_fix_constant()[source]

initialize_fix_constant(con_len, device)[source]

training: bool
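
A construction-only sketch (not taken from the library's documentation), with illustrative dimensions. How the encoder obtains word embeddings for the index inputs is model-specific, so the forward call is only indicated in a comment rather than executed.

from mwptoolkit.module.Encoder.rnn_encoder import SalignedEncoder

encoder = SalignedEncoder(
    dim_embed=128,        # dimension of the input embedding
    dim_hidden=256,       # dimension of the encoder RNN
    dim_last=256,         # dimension that the last state is transformed to
    dropout_rate=0.1,
    dim_attn_hidden=256,
)

# Per the signature above, encoding a batch looks like:
#   encoded = encoder(inputs, lengths, constant_indices)
# where inputs holds word indices [batch_size, sequence_length],
# lengths is [batch_size], and constant_indices gives, per instance,
# the positions of the constant tokens used for the variable embeddings.
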
class mwptoolkit.module.Encoder.rnn_encoder.SelfAttentionRNNEncoder(embedding_size, hidden_size, context_size, num_layers, rnn_cell_type, dropout_ratio, bidirectional=True)[source]

Bases: Module

Self Attentional Recurrent Neural Network (RNN) encoder.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input_embeddings, input_length, hidden_states=None)[source]

Implement the encoding process.

Parameters
  • input_embeddings (torch.Tensor) – source sequence embedding, shape: [batch_size, sequence_length, embedding_size].

  • input_length (torch.Tensor) – length of input sequence, shape: [batch_size].

  • hidden_states (torch.Tensor) – initial hidden states, default: None.

Returns

output features, shape: [batch_size, sequence_length, num_directions * hidden_size], and hidden states, shape: [batch_size, num_layers * num_directions, hidden_size].

Return type

tuple(torch.Tensor, torch.Tensor)

init_hidden(input_embeddings)[source]

Initialize the hidden states of the RNN.

Parameters

input_embeddings (torch.Tensor) – input sequence embedding, shape: [batch_size, sequence_length, embedding_size].

Returns

the initial hidden states.

Return type

torch.Tensor

training: bool
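
A minimal usage sketch, parallel to the BasicRNNEncoder example above (not taken from the library's documentation); context_size and the other dimensions are illustrative assumptions.

import torch
from mwptoolkit.module.Encoder.rnn_encoder import SelfAttentionRNNEncoder

encoder = SelfAttentionRNNEncoder(
    embedding_size=128,
    hidden_size=256,
    context_size=256,      # assumption: size of the self-attention context
    num_layers=2,
    rnn_cell_type='gru',   # assumption: same cell-name strings as BasicRNNEncoder
    dropout_ratio=0.5,
    bidirectional=True,
)

input_embeddings = torch.randn(4, 10, 128)   # [batch, seq_len, embedding_size]
input_length = torch.tensor([10, 9, 7, 5])   # valid lengths, descending

output, hidden = encoder(input_embeddings, input_length)
# output: [batch_size, sequence_length, num_directions * hidden_size]
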