espnet.nets.pytorch_backend.lm.seq_rnn.SequentialRNNLM
class espnet.nets.pytorch_backend.lm.seq_rnn.SequentialRNNLM(n_vocab, args)
Bases: LMInterface, Module
Sequential RNNLM.
Initialize class.
- Parameters:
- n_vocab (int) – The size of the vocabulary.
- args (argparse.Namespace) – Configurations. See add_arguments().
static add_arguments(parser)
Add arguments to command line argument parser.
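A minimal sketch of the static add_arguments pattern used by LMInterface implementations: the model class registers its own hyperparameters on a shared parser. The ToyLM class and the flag names below (--unit, --layer) are illustrative stand-ins, not the real options added by SequentialRNNLM.

```python
import argparse

class ToyLM:
    """Illustrative stand-in for an LMInterface implementation."""

    @staticmethod
    def add_arguments(parser):
        # Hypothetical flags; the real class defines its own set.
        parser.add_argument("--unit", type=int, default=650)
        parser.add_argument("--layer", type=int, default=2)
        return parser

parser = argparse.ArgumentParser()
ToyLM.add_arguments(parser)
args = parser.parse_args(["--unit", "1024"])
print(args.unit, args.layer)  # --layer falls back to its default
```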
forward(x, t)
Compute LM loss value from buffer sequences.
- Parameters:
- x (torch.Tensor) – Input ids. (batch, len)
- t (torch.Tensor) – Target ids. (batch, len)
- Returns: Tuple of the loss to backward (scalar), the negative log-likelihood of t, i.e. -log p(t) (scalar), and the number of elements in x (scalar)
- Return type: tuple[torch.Tensor, torch.Tensor, torch.Tensor]
Notes
The last two return values are used to compute perplexity: p(t)^{-1/n} = exp(-log p(t) / n)
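To make the note above concrete, here is a small sketch of turning the last two return values of forward() into a perplexity. The numeric values of nll and count are made up for illustration; in practice they would come out of forward() as scalar tensors.

```python
import math

# Hypothetical stand-ins for forward()'s last two return values:
# nll   -> -log p(t), summed over all target tokens
# count -> the number of elements in x
nll = 230.2
count = 100

# Perplexity: p(t)^{-1/n} = exp(-log p(t) / n)
ppl = math.exp(nll / count)
print(ppl)
```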
init_state(x)
Get an initial state for decoding.
- Parameters: x (torch.Tensor) – The encoded feature tensor
- Returns: initial state
score(y, state, x)
Score new token.
- Parameters:
- y (torch.Tensor) – 1D torch.int64 prefix tokens.
- state – Scorer state for prefix tokens
- x (torch.Tensor) – 2D encoder feature that generates ys.
- Returns: Tuple of torch.float32 scores for the next token (shape (n_vocab,)) and the next state for ys
- Return type: tuple[torch.Tensor, Any]
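The init_state/score pair above follows ESPnet's scorer contract: a decoder calls init_state() once, then repeatedly calls score(y, state, x) to get log-probabilities for the next token plus an updated state. The sketch below drives that contract with DummyLM, a made-up stand-in (not the real SequentialRNNLM) that returns uniform scores, so the loop structure is visible without loading a trained model.

```python
import math

class DummyLM:
    """Hypothetical scorer following the (y, state, x) -> (scores, state) contract."""
    n_vocab = 4

    def init_state(self, x):
        # Illustrative state: the number of tokens scored so far.
        return 0

    def score(self, y, state, x):
        # Uniform log-probabilities over the vocabulary.
        scores = [math.log(1.0 / self.n_vocab)] * self.n_vocab
        return scores, state + 1

lm = DummyLM()
x = None              # a pure LM ignores the encoder features
state = lm.init_state(x)
prefix = [1]          # running hypothesis (token ids)
for _ in range(3):
    scores, state = lm.score(prefix, state, x)
    # Greedy decoding: extend the prefix with the best-scoring token.
    next_token = max(range(len(scores)), key=scores.__getitem__)
    prefix.append(next_token)
print(prefix, state)
```

In beam search the same score() call is made once per hypothesis per step, with each hypothesis carrying its own state.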