espnet2.asr.encoder.hubert_encoder.FairseqHubertPretrainEncoder
class espnet2.asr.encoder.hubert_encoder.FairseqHubertPretrainEncoder(input_size: int = 1, output_size: int = 1024, linear_units: int = 1024, attention_heads: int = 12, num_blocks: int = 12, dropout_rate: float = 0.0, attention_dropout_rate: float = 0.0, activation_dropout_rate: float = 0.0, hubert_dict: str = './dict.txt', label_rate: int = 100, checkpoint_activations: bool = False, sample_rate: int = 16000, use_amp: bool = False, **kwargs)
Bases: AbsEncoder
Fairseq Hubert pretrain encoder module, used only in the pretraining stage
- Parameters:
- input_size – input dim
- output_size – dimension of attention
- linear_units – dimension of feedforward layers
- attention_heads – the number of heads in multi-head attention
- num_blocks – the number of encoder blocks
- dropout_rate – dropout rate
- attention_dropout_rate – dropout rate in attention
- activation_dropout_rate – dropout rate applied to activations in the feed-forward layers
- hubert_dict – target dictionary for Hubert pretraining
- label_rate – label frame rate; -1 for sequence labels
- sample_rate – target sample rate
- use_amp – whether to use automatic mixed precision
- normalize_before – whether to use layer_norm before the first block
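A minimal construction sketch (illustrative, not taken from the ESPnet documentation; assumes `espnet` and `fairseq` are installed and that `./dict.txt` is a fairseq-style label dictionary):

```python
from espnet2.asr.encoder.hubert_encoder import FairseqHubertPretrainEncoder

# Hypothetical configuration: a deliberately small model for illustration.
# Argument names follow the constructor signature above.
encoder = FairseqHubertPretrainEncoder(
    input_size=1,
    output_size=256,
    linear_units=1024,
    attention_heads=4,
    num_blocks=6,
    hubert_dict="./dict.txt",  # fairseq dictionary of pretraining labels
    label_rate=100,            # label frames per second
    sample_rate=16000,
)
```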
cast_mask_emb()
Casts the encoder's mask embedding to half precision when use_amp is enabled.
forward(xs_pad: Tensor, ilens: Tensor, ys_pad: Tensor, ys_pad_length: Tensor, prev_states: Tensor | None = None) → Tuple[Tensor, Tensor, Tensor | None]
Forward pass of the Hubert pretrain encoder.
- Parameters:
- xs_pad – input tensor (B, L, D)
- ilens – input length (B)
- ys_pad – pretraining target tensor (B, T_label)
- ys_pad_length – pretraining target lengths (B)
- prev_states – not used in the current implementation
- Returns: position embedded tensor and mask
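An illustrative forward call, assuming a raw 16 kHz waveform batch and frame-level pseudo-labels at label_rate=100; the shapes and the 500-class label vocabulary are made-up values for this sketch, and the exact returned structure should be checked against the ESPnet source:

```python
import torch

B, T = 2, 16000                            # two 1-second waveforms at 16 kHz
xs_pad = torch.randn(B, T)                 # padded input (docstring shape (B, L, D) with D = input_size = 1)
ilens = torch.tensor([T, T])               # valid lengths before padding
ys_pad = torch.randint(0, 500, (B, 100))   # 100 pseudo-labels per second at label_rate=100
ys_pad_length = torch.tensor([100, 100])   # label lengths per utterance

# Runs the masked-prediction pretraining forward pass.
enc_outputs = encoder(xs_pad, ilens, ys_pad, ys_pad_length)
```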
output_size() → int
Returns the encoder output dimension.
reload_pretrained_parameters()