espnet2.asr.encoder.multiconvformer_encoder.MultiConvConformerEncoder
class espnet2.asr.encoder.multiconvformer_encoder.MultiConvConformerEncoder(input_size: int, output_size: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, cgmlp_linear_units: int = 2048, multicgmlp_type: str = 'concat_fusion', multicgmlp_kernel_sizes: int | str = '7,15,23,31', multicgmlp_merge_conv_kernel: int = 31, multicgmlp_use_non_linear: bool = True, use_linear_after_conv: bool = False, gate_activation: str = 'identity', input_layer: str = 'conv2d', normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 3, macaron_style: bool = False, rel_pos_type: str = 'legacy', pos_enc_layer_type: str = 'rel_pos', selfattention_layer_type: str = 'rel_selfattn', activation_type: str = 'swish', use_cnn_module: bool = True, zero_triu: bool = False, padding_idx: int = -1, interctc_layer_idx: List[int] = [], interctc_use_conditioning: bool = False, stochastic_depth_rate: float | List[float] = 0.0, layer_drop_rate: float = 0.0, max_pos_emb_len: int = 5000)
Bases: AbsEncoder
Multiconvformer encoder module. Link to the paper: https://arxiv.org/abs/2407.03718
- Parameters:
- input_size (int) – Input dimension.
- output_size (int) – Dimension of attention.
- attention_heads (int) – The number of heads of multi head attention.
- linear_units (int) – The number of units of position-wise feed forward.
- num_blocks (int) – The number of encoder blocks.
- dropout_rate (float) – Dropout rate.
- positional_dropout_rate (float) – Dropout rate after adding positional encoding.
- attention_dropout_rate (float) – Dropout rate in attention.
- cgmlp_linear_units (int) – The number of units used in the CGMLP block.
- multicgmlp_type (str) – “sum”, “weighted_sum”, “concat” or “concat_fusion”.
- multicgmlp_kernel_sizes (Union[int, str]) – Comma-separated list of kernel sizes, e.g. “7,15,23,31”.
- multicgmlp_merge_conv_kernel (int) – Kernel size of the depthwise convolution used for fusion in MultiCGMLP.
- multicgmlp_use_non_linear (bool) – Whether to use a non-linear activation in the MultiCGMLP fusion.
- use_linear_after_conv (bool) – Whether to use a linear layer after MultiCGMLP.
- gate_activation (str) – The activation function used in CGMLP gating.
- input_layer (Union[str, torch.nn.Module]) – Input layer type.
- normalize_before (bool) – Whether to use layer_norm before the first block.
- concat_after (bool) – Whether to concatenate the attention layer’s input and output. If True, an additional linear layer is applied, i.e. x -> x + linear(concat(x, att(x))). If False, no additional linear layer is applied, i.e. x -> x + att(x).
- positionwise_layer_type (str) – “linear”, “conv1d”, or “conv1d-linear”.
- positionwise_conv_kernel_size (int) – Kernel size of positionwise conv1d layer.
- rel_pos_type (str) – Whether to use the latest relative positional encoding or the legacy one. The legacy relative positional encoding will be deprecated in the future. More details can be found in https://github.com/espnet/espnet/pull/2816.
- pos_enc_layer_type (str) – Encoder positional encoding layer type.
- selfattention_layer_type (str) – Encoder attention layer type.
- activation_type (str) – Encoder activation function type.
- macaron_style (bool) – Whether to use macaron style for positionwise layer.
- use_cnn_module (bool) – Whether to use convolution module.
- zero_triu (bool) – Whether to zero the upper triangular part of attention matrix.
- padding_idx (int) – Padding idx for input_layer=embed.
- interctc_layer_idx (List[int]) – Layer indices at which intermediate CTC loss is applied.
- interctc_use_conditioning (bool) – Whether to condition subsequent layers on intermediate CTC predictions (self-conditioned CTC).
- stochastic_depth_rate (Union[float, List[float]]) – Stochastic depth rate, either shared across layers or given per layer.
- layer_drop_rate (float) – Probability of dropping each encoder layer during training (LayerDrop).
- max_pos_emb_len (int) – Maximum length for the positional encoding.
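A minimal construction sketch (the 80-dimensional input is an assumed log-Mel feature size; the remaining arguments shown are the documented defaults):

```python
import torch

from espnet2.asr.encoder.multiconvformer_encoder import MultiConvConformerEncoder

# 6 blocks of 256-dim, 4-head attention; the MultiCGMLP runs four
# depthwise-conv branches with kernel sizes 7, 15, 23 and 31.
encoder = MultiConvConformerEncoder(
    input_size=80,                         # assumed feature dimension
    output_size=256,
    attention_heads=4,
    num_blocks=6,
    multicgmlp_type="concat_fusion",
    multicgmlp_kernel_sizes="7,15,23,31",
    input_layer="conv2d",                  # ~4x subsampling along time
)
```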
forward(xs_pad: Tensor, ilens: Tensor, prev_states: Tensor | None = None, ctc: CTC | None = None) → Tuple[Tensor, Tensor, Tensor | None]
Calculate forward propagation.
- Parameters:
- xs_pad (torch.Tensor) – Input tensor (#batch, L, input_size).
- ilens (torch.Tensor) – Input length (#batch).
- prev_states (torch.Tensor) – Not used currently.
- ctc (CTC) – Intermediate CTC module, used when interctc_use_conditioning is True.
- Returns: Output tensor (#batch, L, output_size), output lengths (#batch), and None (placeholder, not used currently).
- Return type: Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]
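A usage sketch for forward, assuming the encoder constructed above and random features standing in for a real frontend:

```python
xs_pad = torch.randn(2, 100, 80)  # (#batch, L, input_size)
ilens = torch.tensor([100, 72])   # valid frames per utterance (#batch)

out, olens, _ = encoder(xs_pad, ilens)
# out:   (#batch, L', output_size); L' is roughly L/4 after conv2d subsampling
# olens: subsampled output lengths (#batch)
```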
output_size() → int
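Returns the encoder output dimension, e.g. for sizing a downstream decoder or CTC head. For the sketch above:

```python
assert encoder.output_size() == 256  # equals the output_size argument
```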