mwptoolkit.module.Attention.seq_attention

class mwptoolkit.module.Attention.seq_attention.Attention(dim_value, dim_query, dim_hidden=256, dropout_rate=0.5)[source]

Bases: Module

Calculates attention over a sequence of values given a query vector.

Parameters
  • dim_value (int) – Dimension of value.

  • dim_query (int) – Dimension of query.

  • dim_hidden (int) – Dimension of hidden layer in attention calculation.

  • dropout_rate (float) – Dropout probability applied when computing attention scores.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(value, query, lens)[source]

Generates a variable embedding by attending over the value sequence.

Parameters
  • query (FloatTensor) – Current hidden state, with size [batch_size, dim_query].

  • value (FloatTensor) – Sequence to be attended, with size [batch_size, seq_len, dim_value].

  • lens (list of int) – Lengths of values in a batch.

Returns

Attention-weighted sum of values (context vector), with size [batch_size, dim_value].

Return type

FloatTensor

training: bool
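The forward pass above can be sketched as follows. This is a hypothetical, dependency-free reconstruction: for brevity it uses dot-product scoring instead of the module's learned MLP, and it assumes dim_value == dim_query.

```python
import torch
import torch.nn.functional as F

def attention_sketch(value, query, lens):
    """Hypothetical sketch of Attention.forward (assumptions noted above).

    value: [batch_size, seq_len, dim]
    query: [batch_size, dim]
    lens:  list of true (unpadded) sequence lengths
    """
    batch_size, seq_len, _ = value.shape
    # Unnormalized score per time step: dot product of query with each value.
    scores = torch.bmm(value, query.unsqueeze(2)).squeeze(2)   # [batch, seq_len]
    # Build a mask that is True at padded positions and fill them with -inf
    # so softmax assigns them exactly zero weight.
    positions = torch.arange(seq_len).unsqueeze(0)             # [1, seq_len]
    mask = positions >= torch.tensor(lens).unsqueeze(1)        # [batch, seq_len]
    scores = scores.masked_fill(mask, float('-inf'))
    weights = F.softmax(scores, dim=1)                         # [batch, seq_len]
    # Weighted sum over the sequence dimension.
    return torch.bmm(weights.unsqueeze(1), value).squeeze(1)   # [batch, dim]
```

Because padded positions are filled with -inf before the softmax, the returned vector depends only on the first lens[i] steps of each sequence.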
class mwptoolkit.module.Attention.seq_attention.MaskedRelevantScore(dim_value, dim_query, dim_hidden=256, dropout_rate=0.0)[source]

Bases: Module

Relevant score masked by sequence lengths.

Parameters
  • dim_value (int) – Dimension of value.

  • dim_query (int) – Dimension of query.

  • dim_hidden (int) – Dimension of hidden layer in attention calculation.

  • dropout_rate (float) – Dropout probability applied when computing relevance scores.


forward(value, query, lens)[source]

Computes a relevance score for each position of the value sequence, masking positions beyond each sequence's length.

Parameters
  • query (torch.FloatTensor) – Current hidden state, with size [batch_size, dim_query].

  • value (torch.FloatTensor) – Sequence to be attended, with size [batch_size, seq_len, dim_value].

  • lens (list of int) – Lengths of values in a batch.

Returns

Activation for each operand, with size [batch, max([len(os) for os in operands])].

Return type

torch.Tensor

training: bool
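The "masked by sequence lengths" part can be reproduced with a small helper; the name length_mask and its layout are assumptions for illustration, not the library's exact code.

```python
import torch

def length_mask(lens, max_len):
    """Hypothetical helper: boolean mask that is True at padded positions.

    Scores at masked positions can then be filled with -inf before softmax,
    so padding never receives attention weight.
    """
    positions = torch.arange(max_len).unsqueeze(0)       # [1, max_len]
    return positions >= torch.tensor(lens).unsqueeze(1)  # [batch, max_len]
```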
class mwptoolkit.module.Attention.seq_attention.RelevantScore(dim_value, dim_query, hidden1, dropout_rate=0)[source]

Bases: Module

Computes an unnormalized relevance score between a query and each step of a value sequence.

forward(value, query)[source]
Parameters
  • value (torch.FloatTensor) – shape [batch, seq_len, dim_value].

  • query (torch.FloatTensor) – shape [batch, dim_query].

training: bool
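The constructor arguments (projections of dim_value and dim_query into a hidden layer of size hidden1, plus dropout) suggest an additive, Bahdanau-style scoring MLP: score(v_t, q) = wᵀ tanh(W_v v_t + W_q q). A hypothetical sketch under that assumption, not the library's exact code:

```python
import torch
import torch.nn as nn

class RelevantScoreSketch(nn.Module):
    """Hypothetical additive scoring MLP; names and layer layout are assumed."""

    def __init__(self, dim_value, dim_query, hidden1, dropout_rate=0.0):
        super().__init__()
        self.lin_value = nn.Linear(dim_value, hidden1)
        self.lin_query = nn.Linear(dim_query, hidden1)
        self.dropout = nn.Dropout(dropout_rate)
        self.score = nn.Linear(hidden1, 1)

    def forward(self, value, query):
        # value: [batch, seq_len, dim_value]; query: [batch, dim_query]
        # Broadcast the projected query across all time steps of the value.
        hidden = torch.tanh(self.lin_value(value) + self.lin_query(query).unsqueeze(1))
        return self.score(self.dropout(hidden)).squeeze(-1)  # [batch, seq_len]
```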
class mwptoolkit.module.Attention.seq_attention.SeqAttention(hidden_size, context_size)[source]

Bases: Module

Sequence-to-sequence attention: attends over encoder outputs given the current decoder state.

forward(inputs, encoder_outputs, mask)[source]
Parameters
  • inputs (torch.Tensor) – shape [batch_size, 1, hidden_size].

  • encoder_outputs (torch.Tensor) – shape [batch_size, sequence_length, hidden_size].

  • mask (torch.Tensor) – attention mask over encoder positions, with shape [batch_size, sequence_length].

Returns

output, shape [batch_size, 1, context_size]. attention, shape [batch_size, 1, sequence_length].

Return type

tuple(torch.Tensor, torch.Tensor)

training: bool
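A minimal sketch of this attention step, assuming dot-product scoring and a boolean mask that is True at padded positions. The real module additionally maps the attended context to context_size through learned parameters; here context_size == hidden_size for simplicity.

```python
import torch
import torch.nn.functional as F

def seq_attention_sketch(inputs, encoder_outputs, mask):
    """Hypothetical sketch of SeqAttention.forward (assumptions noted above).

    inputs:          [batch, 1, hidden_size]   (current decoder state)
    encoder_outputs: [batch, seq_len, hidden_size]
    mask:            [batch, seq_len] bool, True at padded positions
    Returns (output, attention), mirroring the documented return shapes.
    """
    # Score each encoder position against the decoder state.
    scores = torch.bmm(inputs, encoder_outputs.transpose(1, 2))   # [batch, 1, seq_len]
    scores = scores.masked_fill(mask.unsqueeze(1), float('-inf'))
    attention = F.softmax(scores, dim=2)                          # [batch, 1, seq_len]
    output = torch.bmm(attention, encoder_outputs)                # [batch, 1, hidden]
    return output, attention
```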