espnet.nets.pytorch_backend.transformer.embedding.PositionalEncoding
class espnet.nets.pytorch_backend.transformer.embedding.PositionalEncoding(d_model, dropout_rate, max_len=5000, reverse=False)
Bases: Module
Positional encoding.
- Parameters:
- d_model (int) – Embedding dimension.
- dropout_rate (float) – Dropout rate.
- max_len (int) – Maximum input length.
- reverse (bool) – Whether to reverse the input position. Only for the class LegacyRelPositionalEncoding; it is removed in the current class RelPositionalEncoding.
Construct a PositionalEncoding object.
extend_pe(x)
Reset the positional encodings, extending the cached table if the input x is longer than the current one.
forward(x: Tensor)
Add positional encoding.
- Parameters: x (torch.Tensor) – Input tensor (batch, time, *).
- Returns: Encoded tensor (batch, time, *).
- Return type: torch.Tensor
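For illustration, the sinusoidal table this class caches can be sketched in plain Python. This is a minimal sketch of the standard sin/cos formula only; the actual class also scales the input by sqrt(d_model) and applies dropout inside forward(), which are omitted here:

```python
import math

def positional_encoding(max_len, d_model):
    """Build a (max_len, d_model) sinusoidal positional encoding table.

    Even columns hold sin(pos / 10000^(i / d_model)),
    odd columns hold the matching cos term.
    (Illustrative sketch, not ESPnet's actual implementation.)
    """
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            # Inverse frequency for this dimension pair.
            div = math.exp(-(i / d_model) * math.log(10000.0))
            pe[pos][i] = math.sin(pos * div)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(pos * div)
    return pe
```

In the module itself, forward() adds this table (broadcast over the batch dimension) to the scaled input embedding before dropout, so the encoded tensor keeps the shape (batch, time, *).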