espnet2.asr.encoder.transformer_encoder_multispkr.TransformerEncoder
class espnet2.asr.encoder.transformer_encoder_multispkr.TransformerEncoder(input_size: int, output_size: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, num_blocks_sd: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str | None = 'conv2d', pos_enc_class=<class 'espnet.nets.pytorch_backend.transformer.embedding.PositionalEncoding'>, normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 1, padding_idx: int = -1, num_inf: int = 1)
Bases: AbsEncoder
Transformer encoder module for multi-speaker speech recognition: a stack of shared recognition blocks followed by speaker-dependent encoder blocks, one branch per output.
- Parameters:
- input_size – input dimension
- output_size – dimension of attention
- attention_heads – number of heads in multi-head attention
- linear_units – number of units in the position-wise feed-forward layer
- num_blocks – number of shared recognition encoder blocks
- num_blocks_sd – number of speaker-dependent encoder blocks
- dropout_rate – dropout rate
- positional_dropout_rate – dropout rate after adding positional encoding
- attention_dropout_rate – dropout rate in attention
- input_layer – input layer type
- pos_enc_class – PositionalEncoding or ScaledPositionalEncoding
- normalize_before – whether to use layer_norm before the first block
- concat_after – whether to concatenate the attention layer's input and output. If True, an additional linear layer is applied, i.e. x -> x + linear(concat(x, att(x))); if False, no additional linear layer is applied, i.e. x -> x + att(x)
- positionwise_layer_type – linear or conv1d
- positionwise_conv_kernel_size – kernel size of the position-wise conv1d layer
- padding_idx – padding_idx for input_layer=embed
- num_inf – number of inference outputs (output branches)
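A minimal construction sketch based on the signature above; the feature dimension and block counts are illustrative choices, not values prescribed by this API:

```python
from espnet2.asr.encoder.transformer_encoder_multispkr import TransformerEncoder

# Hypothetical configuration: 80-dim log-mel features, two speaker branches.
encoder = TransformerEncoder(
    input_size=80,      # feature dimension of the input
    output_size=256,    # attention dimension
    attention_heads=4,
    num_blocks=6,       # shared recognition blocks
    num_blocks_sd=6,    # speaker-dependent blocks per output branch
    num_inf=2,          # number of output branches (speakers)
)
print(encoder.output_size())  # 256
```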
forward(xs_pad: Tensor, ilens: Tensor, prev_states: Tensor | None = None) → Tuple[Tensor, Tensor, Tensor | None]
Embed positions and encode the padded input.
- Parameters:
- xs_pad – input tensor (B, L, D)
- ilens – input lengths (B)
- prev_states – not used currently
- Returns: encoded output tensor, output lengths, and optional states (currently None)
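A hedged forward-pass sketch; the shapes are illustrative, and the assumed output layout (speaker-dependent branch outputs stacked along a new dimension, with the default conv2d input layer subsampling the time axis) should be verified against the installed version:

```python
import torch

from espnet2.asr.encoder.transformer_encoder_multispkr import TransformerEncoder

encoder = TransformerEncoder(input_size=80, output_size=256, num_inf=2)

xs_pad = torch.randn(4, 200, 80)            # (B, L, D) padded features
ilens = torch.tensor([200, 180, 160, 150])  # (B,) valid lengths per utterance
out, olens, states = encoder(xs_pad, ilens)

# Assumed layout: out is (B, num_inf, L', output_size), where L' reflects
# the conv2d subsampling; olens holds the subsampled lengths; states is None.
print(out.shape, olens, states)
```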
output_size() → int