espnet.nets.pytorch_backend.rnn.attentions.NoAtt
class espnet.nets.pytorch_backend.rnn.attentions.NoAtt
Bases: Module
No attention
Initializes internal Module state, shared by both nn.Module and ScriptModule.
forward(enc_hs_pad, enc_hs_len, dec_z, att_prev)
NoAtt forward
- Parameters:
- enc_hs_pad (torch.Tensor) – padded encoder hidden states (B, T_max, D_enc)
- enc_hs_len (list) – lengths of the padded encoder hidden states (B)
- dec_z (torch.Tensor) – dummy (not used)
- att_prev (torch.Tensor) – dummy (not used)
- Returns: attention weighted encoder state (B, D_enc) and previous attention weights
- Return type: Tuple[torch.Tensor, torch.Tensor]
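A minimal usage sketch, assuming a batch of padded encoder outputs and their lengths; the tensor sizes, variable names, and the choice to pass None for the dummy arguments are illustrative assumptions, not part of the documented API:

```python
import torch
from espnet.nets.pytorch_backend.rnn.attentions import NoAtt

# Illustrative shapes (assumption): batch of 2 utterances, up to 5 encoder frames,
# 4-dimensional encoder hidden states.
enc_hs_pad = torch.randn(2, 5, 4)  # (B, T_max, D_enc)
enc_hs_len = [5, 3]                # valid length of each utterance

att = NoAtt()
att.reset()  # clear cached states before starting a new batch

# dec_z and att_prev are documented as dummy inputs, so None is passed here (assumption).
c, att_w = att(enc_hs_pad, enc_hs_len, dec_z=None, att_prev=None)

print(c.shape)      # attention weighted encoder state, (B, D_enc)
print(att_w.shape)  # previous attention weights over encoder frames
```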
reset()
Reset states.