espnet.nets.pytorch_backend.transducer.blocks.build_blocks
espnet.nets.pytorch_backend.transducer.blocks.build_blocks(net_part: str, idim: int, input_layer_type: str, blocks: List[Dict[str, Any]], repeat_block: int = 0, self_attn_type: str = 'self_attn', positional_encoding_type: str = 'abs_pos', positionwise_layer_type: str = 'linear', positionwise_activation_type: str = 'relu', conv_mod_activation_type: str = 'relu', input_layer_dropout_rate: float = 0.0, input_layer_pos_enc_dropout_rate: float = 0.0, padding_idx: int = -1) → Tuple[Conv2dSubsampling | VGG2L | Sequential, MultiSequential, int, int]
Build custom model blocks.
- Parameters:
- net_part – Network part, either ‘encoder’ or ‘decoder’.
- idim – Input dimension.
- input_layer_type – Input layer type.
- blocks – Blocks parameters for network part.
- repeat_block – Number of times the provided blocks are repeated.
- self_attn_type – Self-attention module type.
- positional_encoding_type – Positional encoding layer type.
- positionwise_layer_type – Positionwise layer type.
- positionwise_activation_type – Positionwise activation type.
- conv_mod_activation_type – Convolutional module activation type.
- input_layer_dropout_rate – Dropout rate for input layer.
- input_layer_pos_enc_dropout_rate – Dropout rate for input layer pos. enc.
- padding_idx – Padding symbol ID for embedding layer.
- Returns: in_layer: Input layer. all_blocks: Encoder/Decoder network. out_dim: Network output dimension. conv_subsampling_factor: Subsampling factor in frontend CNN.
- Return type: Tuple[Conv2dSubsampling | VGG2L | Sequential, MultiSequential, int, int]
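A minimal usage sketch is shown below. The call signature follows the definition above, but the keys and values inside each block dictionary ("type", "d_hidden", "d_ff", "heads", "dropout-rate") and the "conv2d" input layer value are assumptions based on typical ESPnet transducer configurations and may differ in your ESPnet version.

```python
# Hypothetical sketch of building a custom transducer encoder with
# build_blocks. Block-config keys below are assumed, not verified
# against a specific ESPnet release.
from espnet.nets.pytorch_backend.transducer.blocks import build_blocks

blocks = [
    {
        "type": "transformer",  # block type (assumed key/value)
        "d_hidden": 256,        # hidden dimension (assumed key)
        "d_ff": 1024,           # feed-forward dimension (assumed key)
        "heads": 4,             # attention heads (assumed key)
        "dropout-rate": 0.1,    # dropout rate (assumed key)
    },
]

in_layer, all_blocks, out_dim, conv_subsampling_factor = build_blocks(
    net_part="encoder",
    idim=80,                    # e.g. 80-dim log-mel input features
    input_layer_type="conv2d",  # subsampling frontend (assumed value)
    blocks=blocks,
    repeat_block=2,             # repeat the block list twice
)
```

The returned in_layer and all_blocks are torch modules that can be chained in a custom encoder, while conv_subsampling_factor reports how much the frontend reduces the time resolution.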