espnet2.lm.transformer_lm.TransformerLM
class espnet2.lm.transformer_lm.TransformerLM(vocab_size: int, pos_enc: str | None = None, embed_unit: int = 128, att_unit: int = 256, head: int = 2, unit: int = 1024, layer: int = 4, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.1)
Bases: AbsLM
Transformer-based language model.
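A minimal construction sketch, assuming only the constructor signature shown above; the vocabulary size of 5000 is an arbitrary illustrative value, and the remaining arguments repeat the documented defaults:

```python
import torch
from espnet2.lm.transformer_lm import TransformerLM

# Build a small Transformer LM over a 5000-token vocabulary.
lm = TransformerLM(
    vocab_size=5000,
    pos_enc=None,        # default positional-encoding setting
    embed_unit=128,      # embedding dimension
    att_unit=256,        # attention dimension
    head=2,              # number of attention heads
    unit=1024,           # feed-forward dimension
    layer=4,             # number of Transformer layers
)
lm.eval()
```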
batch_score(ys: Tensor, states: List[Any], xs: Tensor) → Tuple[Tensor, List[Any]]
Score new token batch.
- Parameters:
- ys (torch.Tensor) – torch.int64 prefix tokens (n_batch, ylen).
- states (List[Any]) – Scorer states for prefix tokens.
- xs (torch.Tensor) – The encoder feature that generates ys (n_batch, xlen, n_feat).
- Returns: Tuple of batchified scores for the next token, with shape (n_batch, vocab_size), and the next state list for ys.
- Return type: tuple[torch.Tensor, List[Any]]
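A usage sketch, assuming the `lm` instance built above; the token ids and the encoder feature tensor `xs` are placeholder values chosen only to match the documented shapes, and fresh prefixes are assumed to start from `None` states:

```python
# Two prefixes of length 3, ids within the 5000-token vocabulary.
ys = torch.tensor([[1, 23, 45], [1, 67, 89]], dtype=torch.int64)  # (n_batch, ylen)
states = [None, None]             # one scorer state per prefix
xs = torch.zeros(2, 10, 80)       # placeholder encoder features (n_batch, xlen, n_feat)

with torch.no_grad():
    scores, next_states = lm.batch_score(ys, states, xs)

print(scores.shape)   # expected (2, 5000): one score vector per batch entry
print(len(next_states))  # 2: one updated state per prefix
```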
forward(input: Tensor, hidden: None) → Tuple[Tensor, None]
Compute LM loss value from buffer sequences.
- Parameters:
- input (torch.Tensor) – Input ids. (batch, len)
- hidden (torch.Tensor) – Target ids. (batch, len)
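A sketch of the forward pass, assuming the `lm` instance from above; the random ids are placeholders, and the per-position output shape over the vocabulary is an assumption based on the return type shown in the signature:

```python
# A batch of two sequences of 16 token ids.
input_ids = torch.randint(0, 5000, (2, 16), dtype=torch.int64)  # (batch, len)

with torch.no_grad():
    logits, _ = lm(input_ids, None)

print(logits.shape)  # expected (2, 16, 5000): scores over the vocabulary at each position
```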
score(y: Tensor, state: Any, x: Tensor) → Tuple[Tensor, Any]
Score new token.
- Parameters:
- y (torch.Tensor) – 1D torch.int64 prefix tokens.
- state – Scorer state for prefix tokens
- x (torch.Tensor) – encoder feature that generates ys.
- Returns: Tuple of torch.float32 scores for the next token (vocab_size) and the next state for ys.
- Return type: tuple[torch.Tensor, Any]
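A single-hypothesis sketch, again assuming the `lm` instance built above; the 1D prefix and the encoder feature `x` are placeholder values, and `None` is assumed as the initial scorer state:

```python
# A single prefix of three token ids.
y = torch.tensor([1, 23, 45], dtype=torch.int64)  # 1D prefix tokens
x = torch.zeros(10, 80)                           # placeholder encoder feature

with torch.no_grad():
    scores, next_state = lm.score(y, None, x)

print(scores.shape)  # expected (5000,): one score per vocabulary entry
```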