espnet2.hubert package

espnet2.hubert.hubert_loss

Hubert Pretrain Loss module.

class espnet2.hubert.hubert_loss.HubertPretrainLoss(pred_masked_weight: float = 1.0, pred_nomask_weight: float = 0.0, loss_weights: float = 10.0)[source]

Bases: torch.nn.modules.module.Module

Hubert criterion module.

Parameters
  • pred_masked_weight – weight of the predictive loss on masked frames

  • pred_nomask_weight – weight of the predictive loss on unmasked frames

  • loss_weights – weight for the additional loss terms (all terms except the first)
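The weighting scheme these parameters describe can be sketched in plain PyTorch. This is a hypothetical simplification, not ESPnet's actual implementation: the real criterion extracts the masked and unmasked logits from the model's encoder outputs, while here they are passed in directly.

```python
import torch
import torch.nn.functional as F


def hubert_pretrain_loss(masked_logits, masked_targets,
                         nomask_logits, nomask_targets,
                         extra_losses=(),
                         pred_masked_weight=1.0,
                         pred_nomask_weight=0.0,
                         loss_weights=10.0):
    """Weighted sum of masked / unmasked predictive losses plus any
    additional loss terms (e.g. a feature penalty). Sketch only."""
    loss = masked_logits.new_zeros(())
    if pred_masked_weight > 0:
        loss = loss + pred_masked_weight * F.cross_entropy(
            masked_logits, masked_targets, reduction="sum")
    if pred_nomask_weight > 0:
        loss = loss + pred_nomask_weight * F.cross_entropy(
            nomask_logits, nomask_targets, reduction="sum")
    # additional loss terms, each scaled by loss_weights
    for extra in extra_losses:
        loss = loss + loss_weights * extra
    return loss


logits = torch.randn(6, 10)                # 6 frames, 10 pseudo-label classes
targets = torch.randint(0, 10, (6,))
loss = hubert_pretrain_loss(logits, targets, logits, targets,
                            extra_losses=[logits.pow(2).mean()])
```

With the default `pred_nomask_weight=0.0`, the unmasked branch contributes nothing, matching HuBERT's usual setting of predicting only on masked frames.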

forward(model, enc_outputs, reduce=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.

espnet2.hubert.__init__

espnet2.hubert.espnet_model

class espnet2.hubert.espnet_model.HubertPretrainModel(vocab_size: int, token_list: Union[Tuple[str, ...], List[str]], frontend: Optional[espnet2.asr.frontend.abs_frontend.AbsFrontend], specaug: Optional[espnet2.asr.specaug.abs_specaug.AbsSpecAug], normalize: Optional[espnet2.layers.abs_normalize.AbsNormalize], preencoder: Optional[espnet2.asr.preencoder.abs_preencoder.AbsPreEncoder], encoder: espnet2.asr.encoder.abs_encoder.AbsEncoder, ignore_id: int = -1, lsm_weight: float = 0.0, length_normalized_loss: bool = False, report_cer: bool = False, report_wer: bool = False, sym_space: str = '<space>', sym_blank: str = '<blank>', pred_masked_weight: float = 1.0, pred_nomask_weight: float = 0.0, loss_weights: float = 0.0)[source]

Bases: espnet2.train.abs_espnet_model.AbsESPnetModel

Hubert Pretrain model

collect_feats(speech: torch.Tensor, speech_lengths: torch.Tensor, text: torch.Tensor, text_lengths: torch.Tensor, **kwargs) → Dict[str, torch.Tensor][source]
compute_correct(logits)[source]
encode(speech: torch.Tensor, speech_lengths: torch.Tensor, y_pad: torch.Tensor, y_pad_length: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor][source]

Frontend + Encoder. Note that this method is used by asr_inference.py

Parameters
  • speech – (Batch, Length, …)

  • speech_lengths – (Batch,)

  • y_pad – (Batch, Length, …)

  • y_pad_length – (Batch,)
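To illustrate the shape contract of encode() (raw waveform in, frame-level encoder states and their lengths out), here is a toy stand-in that replaces the real frontend and encoder stack with a single strided convolution. ToyEncoder and its 320-sample stride are assumptions for illustration, and the y_pad arguments are omitted.

```python
import torch
import torch.nn as nn


class ToyEncoder(nn.Module):
    """Toy stand-in (not the ESPnet implementation) for the encode() contract."""

    def __init__(self, dim=8, stride=320):
        super().__init__()
        # one strided conv as a stand-in for the frontend + encoder stack
        self.conv = nn.Conv1d(1, dim, kernel_size=stride, stride=stride)
        self.stride = stride

    def encode(self, speech, speech_lengths):
        # speech: (Batch, Length) -> encoder_out: (Batch, Length', Dim)
        feats = self.conv(speech.unsqueeze(1)).transpose(1, 2)
        out_lens = speech_lengths // self.stride
        return feats, out_lens


speech = torch.randn(2, 16000)                # (Batch, Length)
speech_lengths = torch.tensor([16000, 12800])  # (Batch,)
enc = ToyEncoder()
encoder_out, encoder_out_lens = enc.encode(speech, speech_lengths)
# encoder_out: (2, 50, 8); encoder_out_lens: tensor([50, 40])
```

The second waveform is shorter than the first, so its output length is correspondingly smaller; downstream code should mask frames beyond each entry of the returned lengths.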

forward(speech: torch.Tensor, speech_lengths: torch.Tensor, text: torch.Tensor, text_lengths: torch.Tensor, **kwargs) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Frontend + Encoder + loss calculation

Parameters
  • speech – (Batch, Length, …)

  • speech_lengths – (Batch,)

  • text – (Batch, Length)

  • text_lengths – (Batch,)

  • kwargs – "utt_id" is among the inputs.
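The tensor shapes forward() expects can be sketched as follows. The model call itself is shown only as a comment, since constructing a full HubertPretrainModel (frontend, encoder, etc.) is out of scope here, and the description of the return values is an assumption based on the signature above.

```python
import torch

# Dummy batch illustrating the shapes forward() expects (illustrative only).
batch = 2
speech = torch.randn(batch, 16000)             # speech: (Batch, Length)
speech_lengths = torch.tensor([16000, 12000])  # speech_lengths: (Batch,)
text = torch.randint(0, 100, (batch, 50))      # text: (Batch, Length) labels
text_lengths = torch.tensor([50, 42])          # text_lengths: (Batch,)

# With a constructed model, the call would look like:
# loss, stats, weight = model(
#     speech, speech_lengths, text, text_lengths, utt_id=["utt1", "utt2"])
# loss is a scalar tensor, stats a dict of monitoring values,
# and weight the effective batch size.
```

Note that each entry of `speech_lengths` and `text_lengths` must not exceed the padded length of the corresponding tensor's second dimension.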