espnet.nets.pytorch_backend.transformer.embedding.ScaledPositionalEncoding
class espnet.nets.pytorch_backend.transformer.embedding.ScaledPositionalEncoding(d_model, dropout_rate, max_len=5000)
Bases: PositionalEncoding
Scaled positional encoding module.
See Sec. 3.2 of https://arxiv.org/abs/1809.08895
- Parameters:
- d_model (int) – Embedding dimension.
- dropout_rate (float) – Dropout rate.
- max_len (int) – Maximum input length.
Initialize class.
forward(x)
Add positional encoding.
- Parameters: x (torch.Tensor) – Input tensor (batch, time, *).
- Returns: Encoded tensor (batch, time, *).
- Return type: torch.Tensor
reset_parameters()
Reset parameters, restoring the learnable scale alpha to its initial value of 1.0.
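To illustrate the behavior documented above, here is a minimal self-contained sketch of a scaled positional encoding module in plain PyTorch. It is not the ESPnet class itself; the class name `ScaledPositionalEncodingSketch` is hypothetical, and the forward rule assumed here (add the sinusoidal table multiplied by a learnable scalar `alpha`, then apply dropout) follows Sec. 3.2 of the paper linked above.

```python
import math

import torch


class ScaledPositionalEncodingSketch(torch.nn.Module):
    """Sketch of scaled positional encoding (hypothetical stand-in, not the ESPnet class)."""

    def __init__(self, d_model: int, dropout_rate: float, max_len: int = 5000):
        super().__init__()
        # Learnable scalar that scales the sinusoidal table (the "scaled" part).
        self.alpha = torch.nn.Parameter(torch.tensor(1.0))
        self.dropout = torch.nn.Dropout(p=dropout_rate)
        # Precompute the standard sinusoidal table of shape (max_len, d_model).
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float32).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2, dtype=torch.float32)
            * -(math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model). Add the scaled encoding, then dropout.
        x = x + self.alpha * self.pe[:, : x.size(1)]
        return self.dropout(x)

    def reset_parameters(self):
        # Restore the learnable scale to its initial value of 1.0.
        self.alpha.data.fill_(1.0)


enc = ScaledPositionalEncodingSketch(d_model=8, dropout_rate=0.0)
x = torch.zeros(2, 4, 8)  # (batch, time, d_model)
y = enc(x)
print(tuple(y.shape))  # the input shape is preserved: (2, 4, 8)
```

With a zero input and `alpha` at its initial value of 1.0, the output at position 0 is the first row of the sinusoidal table (sin terms 0, cos terms 1 at even/odd channels), which makes the scaling easy to check by hand.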