espnet2.asr.discrete_asr_espnet_model.ESPnetDiscreteASRModel
class espnet2.asr.discrete_asr_espnet_model.ESPnetDiscreteASRModel(vocab_size: int, token_list: Tuple[str, ...] | List[str], frontend: AbsFrontend | None, specaug: AbsSpecAug | None, preencoder: AbsPreEncoder | None, encoder: AbsEncoder, postencoder: AbsPostEncoder | None, decoder: AbsDecoder, ctc: CTC | None, ctc_weight: float = 0.5, interctc_weight: float = 0.0, src_vocab_size: int = 0, src_token_list: Tuple[str, ...] | List[str] = [], ignore_id: int = -1, lsm_weight: float = 0.0, length_normalized_loss: bool = False, report_bleu: bool = True, sym_space: str = '<space>', sym_blank: str = '<blank>', patch_size: int = 1, extract_feats_in_collect_stats: bool = True, share_decoder_input_output_embed: bool = False, share_encoder_decoder_input_embed: bool = False)
Bases: ESPnetMTModel
Encoder-decoder model for ASR over discrete input tokens.
encode(src_text: Tensor, src_text_lengths: Tensor) → Tuple[Tensor, Tensor]
Frontend + Encoder. Note that this method is used by mt_inference.py
- Parameters:
- src_text – (Batch, Length, …)
- src_text_lengths – (Batch,)
- Returns: Encoder output tensor and its output lengths.
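A minimal usage sketch, not taken from the ESPnet documentation: it assumes `model` is an already-constructed `ESPnetDiscreteASRModel` (normally assembled by the ESPnet task/config machinery) and feeds it dummy discrete source tokens purely to illustrate the documented shapes.

```python
import torch

# Placeholder: a real instance needs a frontend, encoder, decoder, etc.,
# which ESPnet normally builds from a training config.
model = ...  # ESPnetDiscreteASRModel

batch, max_len = 2, 50
# Dummy discrete source tokens (e.g. k-means/codec unit ids); values must be
# valid ids in the source vocabulary. Shape: (Batch, Length).
src_text = torch.randint(0, 100, (batch, max_len))
# True lengths before padding. Shape: (Batch,).
src_text_lengths = torch.tensor([max_len, 42])

encoder_out, encoder_out_lens = model.encode(src_text, src_text_lengths)
# encoder_out:      (Batch, Length', encoder_dim)
# encoder_out_lens: (Batch,)
```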
forward(text: Tensor, text_lengths: Tensor, src_text: Tensor, src_text_lengths: Tensor, **kwargs) → Tuple[Tensor, Dict[str, Tensor], Tensor]
Frontend + Encoder + Decoder + loss calculation
- Parameters:
- text – (Batch, Length)
- text_lengths – (Batch,)
- src_text – (Batch, Length)
- src_text_lengths – (Batch,)
- kwargs – “utt_id” is among the inputs.
- Returns: Loss, a dict of statistics to be monitored, and the batch weight.
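A training-style call, again only a hedged sketch: `model` is assumed to be a pre-built instance, all tensors are dummy data shaped as documented above, and the `utt_id` values are illustrative.

```python
import torch

model = ...  # placeholder for a constructed ESPnetDiscreteASRModel

# Dummy target tokens, padded with ignore_id (-1) beyond each true length.
text = torch.randint(0, 30, (2, 20))            # (Batch, Length)
text[1, 15:] = -1
text_lengths = torch.tensor([20, 15])           # (Batch,)

# Dummy discrete source tokens and their true lengths.
src_text = torch.randint(0, 100, (2, 50))       # (Batch, Length)
src_text_lengths = torch.tensor([50, 42])       # (Batch,)

# Extra kwargs such as utt_id are forwarded, as noted above.
loss, stats, weight = model(
    text, text_lengths, src_text, src_text_lengths, utt_id=["utt1", "utt2"]
)
loss.backward()  # stats: dict of values to monitor; weight: batch weight for averaging
```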