espnet2.tts package

espnet2.tts.abs_tts

Text-to-speech abstract class.

class espnet2.tts.abs_tts.AbsTTS(*args, **kwargs)[source]

Bases: torch.nn.modules.module.Module, abc.ABC

TTS abstract class.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

abstract forward(text: torch.Tensor, text_lengths: torch.Tensor, feats: torch.Tensor, feats_lengths: torch.Tensor, **kwargs) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate outputs and return the loss tensor.

abstract inference(text: torch.Tensor, **kwargs) → Dict[str, torch.Tensor][source]

Return predicted output as a dict.

property require_raw_speech

Return whether or not raw_speech is required.

property require_vocoder

Return whether or not vocoder is required.
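
A minimal subclass sketch, assuming only the interface documented above; the class DummyTTS and its internals are illustrative and not part of ESPnet:

import torch
from typing import Dict
from espnet2.tts.abs_tts import AbsTTS


class DummyTTS(AbsTTS):
    """Illustrative AbsTTS subclass, not an ESPnet model."""

    def __init__(self, idim: int, odim: int):
        super().__init__()
        self.proj = torch.nn.Linear(idim, odim)

    def forward(self, text, text_lengths, feats, feats_lengths, **kwargs):
        # The trainer expects a (loss, stats, weight) triple.
        loss = self.proj.weight.sum() * 0.0  # dummy differentiable scalar
        stats = {"loss": loss.detach()}
        weight = text.new_tensor(text.size(0))  # typically the batch size
        return loss, stats, weight

    def inference(self, text, **kwargs) -> Dict[str, torch.Tensor]:
        # Return generated features keyed by name, as documented above.
        onehot = torch.nn.functional.one_hot(text, self.proj.in_features).float()
        return {"feat_gen": self.proj(onehot)}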

espnet2.tts.espnet_model

Text-to-speech ESPnet model.

class espnet2.tts.espnet_model.ESPnetTTSModel(feats_extract: Optional[espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract], pitch_extract: Optional[espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract], energy_extract: Optional[espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract], normalize: Optional[espnet2.layers.inversible_interface.InversibleInterface], pitch_normalize: Optional[espnet2.layers.inversible_interface.InversibleInterface], energy_normalize: Optional[espnet2.layers.inversible_interface.InversibleInterface], tts: espnet2.tts.abs_tts.AbsTTS)[source]

Bases: espnet2.train.abs_espnet_model.AbsESPnetModel

ESPnet model for text-to-speech task.

Initialize ESPnetTTSModel module.

collect_feats(text: torch.Tensor, text_lengths: torch.Tensor, speech: torch.Tensor, speech_lengths: torch.Tensor, durations: Optional[torch.Tensor] = None, durations_lengths: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, pitch_lengths: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, energy_lengths: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, **kwargs) → Dict[str, torch.Tensor][source]

Calculate features and return them as a dict.

Parameters:
  • text (Tensor) – Text index tensor (B, T_text).

  • text_lengths (Tensor) – Text length tensor (B,).

  • speech (Tensor) – Speech waveform tensor (B, T_wav).

  • speech_lengths (Tensor) – Speech length tensor (B,).

  • durations (Optional[Tensor]) – Duration tensor.

  • durations_lengths (Optional[Tensor]) – Duration length tensor (B,).

  • pitch (Optional[Tensor]) – Pitch tensor.

  • pitch_lengths (Optional[Tensor]) – Pitch length tensor (B,).

  • energy (Optional[Tensor]) – Energy tensor.

  • energy_lengths (Optional[Tensor]) – Energy length tensor (B,).

  • spembs (Optional[Tensor]) – Speaker embedding tensor (B, D).

  • sids (Optional[Tensor]) – Speaker ID tensor (B, 1).

  • lids (Optional[Tensor]) – Language ID tensor (B, 1).

Returns:

Dict of features.

Return type:

Dict[str, Tensor]

forward(text: torch.Tensor, text_lengths: torch.Tensor, speech: torch.Tensor, speech_lengths: torch.Tensor, durations: Optional[torch.Tensor] = None, durations_lengths: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, pitch_lengths: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, energy_lengths: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, **kwargs) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate outputs and return the loss tensor.

Parameters:
  • text (Tensor) – Text index tensor (B, T_text).

  • text_lengths (Tensor) – Text length tensor (B,).

  • speech (Tensor) – Speech waveform tensor (B, T_wav).

  • speech_lengths (Tensor) – Speech length tensor (B,).

  • durations (Optional[Tensor]) – Duration tensor.

  • durations_lengths (Optional[Tensor]) – Duration length tensor (B,).

  • pitch (Optional[Tensor]) – Pitch tensor.

  • pitch_lengths (Optional[Tensor]) – Pitch length tensor (B,).

  • energy (Optional[Tensor]) – Energy tensor.

  • energy_lengths (Optional[Tensor]) – Energy length tensor (B,).

  • spembs (Optional[Tensor]) – Speaker embedding tensor (B, D).

  • sids (Optional[Tensor]) – Speaker ID tensor (B, 1).

  • lids (Optional[Tensor]) – Language ID tensor (B, 1).

  • kwargs – “utt_id” is among the inputs.

Returns:

Loss scalar tensor. Dict[str, float]: Statistics to be monitored. Tensor: Weight tensor to summarize losses.

Return type:

Tensor

inference(text: torch.Tensor, speech: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, durations: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, **decode_config) → Dict[str, torch.Tensor][source]

Calculate features and return them as a dict.

Parameters:
  • text (Tensor) – Text index tensor (T_text).

  • speech (Tensor) – Speech waveform tensor (T_wav).

  • spembs (Optional[Tensor]) – Speaker embedding tensor (D,).

  • sids (Optional[Tensor]) – Speaker ID tensor (1,).

  • lids (Optional[Tensor]) – Language ID tensor (1,).

  • durations (Optional[Tensor]) – Duration tensor.

  • pitch (Optional[Tensor]) – Pitch tensor.

  • energy (Optional[Tensor]) – Energy tensor.

Returns:

Dict of outputs.

Return type:

Dict[str, Tensor]
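
A hedged composition sketch; in practice these components are usually built from a YAML config by the TTS task, and the hyperparameters below are arbitrary assumptions:

import torch
from espnet2.tts.espnet_model import ESPnetTTSModel
from espnet2.tts.feats_extract.log_mel_fbank import LogMelFbank
from espnet2.tts.tacotron2.tacotron2 import Tacotron2

feats_extract = LogMelFbank(fs=22050, n_fft=1024, hop_length=256, n_mels=80)
tts = Tacotron2(idim=50, odim=80)  # idim = vocabulary size, odim = n_mels
model = ESPnetTTSModel(
    feats_extract=feats_extract,
    pitch_extract=None,
    energy_extract=None,
    normalize=None,
    pitch_normalize=None,
    energy_normalize=None,
    tts=tts,
)

text = torch.randint(1, 50, (2, 10))           # (B, T_text) token ids
text_lengths = torch.tensor([10, 8])           # (B,)
speech = torch.randn(2, 22050)                 # (B, T_wav) dummy waveforms
speech_lengths = torch.tensor([22050, 16000])  # (B,)
loss, stats, weight = model(text, text_lengths, speech, speech_lengths)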

espnet2.tts.__init__

espnet2.tts.fastspeech.fastspeech

Fastspeech related modules for ESPnet2.

class espnet2.tts.fastspeech.fastspeech.FastSpeech(idim: int, odim: int, adim: int = 384, aheads: int = 4, elayers: int = 6, eunits: int = 1536, dlayers: int = 6, dunits: int = 1536, postnet_layers: int = 5, postnet_chans: int = 512, postnet_filts: int = 5, postnet_dropout_rate: float = 0.5, positionwise_layer_type: str = 'conv1d', positionwise_conv_kernel_size: int = 1, use_scaled_pos_enc: bool = True, use_batch_norm: bool = True, encoder_normalize_before: bool = True, decoder_normalize_before: bool = True, encoder_concat_after: bool = False, decoder_concat_after: bool = False, duration_predictor_layers: int = 2, duration_predictor_chans: int = 384, duration_predictor_kernel_size: int = 3, duration_predictor_dropout_rate: float = 0.1, reduction_factor: int = 1, encoder_type: str = 'transformer', decoder_type: str = 'transformer', transformer_enc_dropout_rate: float = 0.1, transformer_enc_positional_dropout_rate: float = 0.1, transformer_enc_attn_dropout_rate: float = 0.1, transformer_dec_dropout_rate: float = 0.1, transformer_dec_positional_dropout_rate: float = 0.1, transformer_dec_attn_dropout_rate: float = 0.1, conformer_rel_pos_type: str = 'legacy', conformer_pos_enc_layer_type: str = 'rel_pos', conformer_self_attn_layer_type: str = 'rel_selfattn', conformer_activation_type: str = 'swish', use_macaron_style_in_conformer: bool = True, use_cnn_in_conformer: bool = True, conformer_enc_kernel_size: int = 7, conformer_dec_kernel_size: int = 31, zero_triu: bool = False, spks: Optional[int] = None, langs: Optional[int] = None, spk_embed_dim: Optional[int] = None, spk_embed_integration_type: str = 'add', use_gst: bool = False, gst_tokens: int = 10, gst_heads: int = 4, gst_conv_layers: int = 6, gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), gst_conv_kernel_size: int = 3, gst_conv_stride: int = 2, gst_gru_layers: int = 1, gst_gru_units: int = 128, init_type: str = 'xavier_uniform', init_enc_alpha: float = 1.0, init_dec_alpha: float = 1.0, use_masking: bool = False, use_weighted_masking: bool = False)[source]

Bases: espnet2.tts.abs_tts.AbsTTS

FastSpeech module for end-to-end text-to-speech.

This is a module of FastSpeech, feed-forward Transformer with duration predictor described in FastSpeech: Fast, Robust and Controllable Text to Speech, which does not require any auto-regressive processing during inference, resulting in fast decoding compared with auto-regressive Transformer.

Initialize FastSpeech module.

Parameters:
  • idim (int) – Dimension of the inputs.

  • odim (int) – Dimension of the outputs.

  • elayers (int) – Number of encoder layers.

  • eunits (int) – Number of encoder hidden units.

  • dlayers (int) – Number of decoder layers.

  • dunits (int) – Number of decoder hidden units.

  • postnet_layers (int) – Number of postnet layers.

  • postnet_chans (int) – Number of postnet channels.

  • postnet_filts (int) – Kernel size of postnet.

  • postnet_dropout_rate (float) – Dropout rate in postnet.

  • use_scaled_pos_enc (bool) – Whether to use trainable scaled pos encoding.

  • use_batch_norm (bool) – Whether to use batch normalization in encoder prenet.

  • encoder_normalize_before (bool) – Whether to apply layernorm layer before encoder block.

  • decoder_normalize_before (bool) – Whether to apply layernorm layer before decoder block.

  • encoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in encoder.

  • decoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in decoder.

  • duration_predictor_layers (int) – Number of duration predictor layers.

  • duration_predictor_chans (int) – Number of duration predictor channels.

  • duration_predictor_kernel_size (int) – Kernel size of duration predictor.

  • duration_predictor_dropout_rate (float) – Dropout rate in duration predictor.

  • reduction_factor (int) – Reduction factor.

  • encoder_type (str) – Encoder type (“transformer” or “conformer”).

  • decoder_type (str) – Decoder type (“transformer” or “conformer”).

  • transformer_enc_dropout_rate (float) – Dropout rate in encoder except attention and positional encoding.

  • transformer_enc_positional_dropout_rate (float) – Dropout rate after encoder positional encoding.

  • transformer_enc_attn_dropout_rate (float) – Dropout rate in encoder self-attention module.

  • transformer_dec_dropout_rate (float) – Dropout rate in decoder except attention & positional encoding.

  • transformer_dec_positional_dropout_rate (float) – Dropout rate after decoder positional encoding.

  • transformer_dec_attn_dropout_rate (float) – Dropout rate in decoder self-attention module.

  • conformer_rel_pos_type (str) – Relative pos encoding type in conformer.

  • conformer_pos_enc_layer_type (str) – Pos encoding layer type in conformer.

  • conformer_self_attn_layer_type (str) – Self-attention layer type in conformer.

  • conformer_activation_type (str) – Activation function type in conformer.

  • use_macaron_style_in_conformer – Whether to use macaron style FFN.

  • use_cnn_in_conformer – Whether to use CNN in conformer.

  • conformer_enc_kernel_size – Kernel size of encoder conformer.

  • conformer_dec_kernel_size – Kernel size of decoder conformer.

  • zero_triu – Whether to use zero triu in relative self-attention module.

  • spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.

  • langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.

  • spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.

  • spk_embed_integration_type – How to integrate speaker embedding.

  • use_gst (bool) – Whether to use global style token.

  • gst_tokens (int) – The number of GST embeddings.

  • gst_heads (int) – The number of heads in GST multihead attention.

  • gst_conv_layers (int) – The number of conv layers in GST.

  • gst_conv_chans_list (Sequence[int]) – List of the number of channels of conv layers in GST.

  • gst_conv_kernel_size (int) – Kernel size of conv layers in GST.

  • gst_conv_stride (int) – Stride size of conv layers in GST.

  • gst_gru_layers (int) – The number of GRU layers in GST.

  • gst_gru_units (int) – The number of GRU units in GST.

  • init_type (str) – How to initialize transformer parameters.

  • init_enc_alpha (float) – Initial value of alpha in scaled pos encoding of the encoder.

  • init_dec_alpha (float) – Initial value of alpha in scaled pos encoding of the decoder.

  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

forward(text: torch.Tensor, text_lengths: torch.Tensor, feats: torch.Tensor, feats_lengths: torch.Tensor, durations: torch.Tensor, durations_lengths: torch.Tensor, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, joint_training: bool = False) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • text (LongTensor) – Batch of padded character ids (B, T_text).

  • text_lengths (LongTensor) – Batch of lengths of each input (B,).

  • feats (Tensor) – Batch of padded target features (B, T_feats, odim).

  • feats_lengths (LongTensor) – Batch of the lengths of each target (B,).

  • durations (LongTensor) – Batch of padded durations (B, T_text + 1).

  • durations_lengths (LongTensor) – Batch of duration lengths (B, T_text + 1).

  • spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).

  • sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).

  • lids (Optional[Tensor]) – Batch of language IDs (B, 1).

  • joint_training (bool) – Whether to perform joint training with vocoder.

Returns:

Loss scalar value. Dict: Statistics to be monitored. Tensor: Weight value if not joint training else model outputs.

Return type:

Tensor

inference(text: torch.Tensor, feats: Optional[torch.Tensor] = None, durations: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, alpha: float = 1.0, use_teacher_forcing: bool = False) → Dict[str, torch.Tensor][source]

Generate the sequence of features given the sequences of characters.

Parameters:
  • text (LongTensor) – Input sequence of characters (T_text,).

  • feats (Optional[Tensor]) – Feature sequence to extract style (N, idim).

  • durations (Optional[LongTensor]) – Groundtruth of duration (T_text + 1,).

  • spembs (Optional[Tensor]) – Speaker embedding (spk_embed_dim,).

  • sids (Optional[Tensor]) – Speaker ID (1,).

  • lids (Optional[Tensor]) – Language ID (1,).

  • alpha (float) – Alpha to control the speed.

  • use_teacher_forcing (bool) – Whether to use teacher forcing. If true, groundtruth of duration, pitch and energy will be used.

Returns:

Output dict including the following items:
  • feat_gen (Tensor): Output sequence of features (T_feats, odim).

  • duration (Tensor): Duration sequence (T_text + 1,).

Return type:

Dict[str, Tensor]
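
A shape-only inference sketch with randomly initialized weights; idim and odim below are arbitrary assumptions, and a trained checkpoint is needed for meaningful output:

import torch
from espnet2.tts.fastspeech.fastspeech import FastSpeech

model = FastSpeech(idim=50, odim=80)
model.eval()
text = torch.randint(1, 50, (12,))  # (T_text,) token ids
with torch.no_grad():
    output = model.inference(text, alpha=1.0)
print(output["feat_gen"].shape)  # (T_feats, 80)
print(output["duration"].shape)  # (T_text + 1,)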

espnet2.tts.fastspeech.__init__

espnet2.tts.tacotron2.__init__

espnet2.tts.tacotron2.tacotron2

Tacotron 2 related modules for ESPnet2.

class espnet2.tts.tacotron2.tacotron2.Tacotron2(idim: int, odim: int, embed_dim: int = 512, elayers: int = 1, eunits: int = 512, econv_layers: int = 3, econv_chans: int = 512, econv_filts: int = 5, atype: str = 'location', adim: int = 512, aconv_chans: int = 32, aconv_filts: int = 15, cumulate_att_w: bool = True, dlayers: int = 2, dunits: int = 1024, prenet_layers: int = 2, prenet_units: int = 256, postnet_layers: int = 5, postnet_chans: int = 512, postnet_filts: int = 5, output_activation: Optional[str] = None, use_batch_norm: bool = True, use_concate: bool = True, use_residual: bool = False, reduction_factor: int = 1, spks: Optional[int] = None, langs: Optional[int] = None, spk_embed_dim: Optional[int] = None, spk_embed_integration_type: str = 'concat', use_gst: bool = False, gst_tokens: int = 10, gst_heads: int = 4, gst_conv_layers: int = 6, gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), gst_conv_kernel_size: int = 3, gst_conv_stride: int = 2, gst_gru_layers: int = 1, gst_gru_units: int = 128, dropout_rate: float = 0.5, zoneout_rate: float = 0.1, use_masking: bool = True, use_weighted_masking: bool = False, bce_pos_weight: float = 5.0, loss_type: str = 'L1+L2', use_guided_attn_loss: bool = True, guided_attn_loss_sigma: float = 0.4, guided_attn_loss_lambda: float = 1.0)[source]

Bases: espnet2.tts.abs_tts.AbsTTS

Tacotron2 module for end-to-end text-to-speech.

This is a module of Spectrogram prediction network in Tacotron2 described in Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions, which converts the sequence of characters into the sequence of Mel-filterbanks.

Initialize Tacotron2 module.

Parameters:
  • idim (int) – Dimension of the inputs.

  • odim (int) – Dimension of the outputs.

  • embed_dim (int) – Dimension of the token embedding.

  • elayers (int) – Number of encoder blstm layers.

  • eunits (int) – Number of encoder blstm units.

  • econv_layers (int) – Number of encoder conv layers.

  • econv_filts (int) – Filter size of encoder conv layers.

  • econv_chans (int) – Number of encoder conv filter channels.

  • dlayers (int) – Number of decoder lstm layers.

  • dunits (int) – Number of decoder lstm units.

  • prenet_layers (int) – Number of prenet layers.

  • prenet_units (int) – Number of prenet units.

  • postnet_layers (int) – Number of postnet layers.

  • postnet_filts (int) – Filter size of postnet conv layers.

  • postnet_chans (int) – Number of postnet filter channels.

  • output_activation (str) – Name of activation function for outputs.

  • adim (int) – Dimension of the MLP in attention.

  • aconv_chans (int) – Number of attention conv filter channels.

  • aconv_filts (int) – Filter size of attention conv layers.

  • cumulate_att_w (bool) – Whether to cumulate previous attention weight.

  • use_batch_norm (bool) – Whether to use batch normalization.

  • use_concate (bool) – Whether to concat enc outputs w/ dec lstm outputs.

  • reduction_factor (int) – Reduction factor.

  • spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.

  • langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.

  • spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.

  • spk_embed_integration_type (str) – How to integrate speaker embedding.

  • use_gst (bool) – Whether to use global style token.

  • gst_tokens (int) – Number of GST embeddings.

  • gst_heads (int) – Number of heads in GST multihead attention.

  • gst_conv_layers (int) – Number of conv layers in GST.

  • gst_conv_chans_list (Sequence[int]) – List of the number of channels of conv layers in GST.

  • gst_conv_kernel_size (int) – Kernel size of conv layers in GST.

  • gst_conv_stride (int) – Stride size of conv layers in GST.

  • gst_gru_layers (int) – Number of GRU layers in GST.

  • gst_gru_units (int) – Number of GRU units in GST.

  • dropout_rate (float) – Dropout rate.

  • zoneout_rate (float) – Zoneout rate.

  • use_masking (bool) – Whether to mask padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

  • bce_pos_weight (float) – Weight of positive sample of stop token (only for use_masking=True).

  • loss_type (str) – Loss function type (“L1”, “L2”, or “L1+L2”).

  • use_guided_attn_loss (bool) – Whether to use guided attention loss.

  • guided_attn_loss_sigma (float) – Sigma in guided attention loss.

  • guided_attn_loss_lambda (float) – Lambda in guided attention loss.

forward(text: torch.Tensor, text_lengths: torch.Tensor, feats: torch.Tensor, feats_lengths: torch.Tensor, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, joint_training: bool = False) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • text (LongTensor) – Batch of padded character ids (B, T_text).

  • text_lengths (LongTensor) – Batch of lengths of each input batch (B,).

  • feats (Tensor) – Batch of padded target features (B, T_feats, odim).

  • feats_lengths (LongTensor) – Batch of the lengths of each target (B,).

  • spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).

  • sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).

  • lids (Optional[Tensor]) – Batch of language IDs (B, 1).

  • joint_training (bool) – Whether to perform joint training with vocoder.

Returns:

Loss scalar value. Dict: Statistics to be monitored. Tensor: Weight value if not joint training else model outputs.

Return type:

Tensor

inference(text: torch.Tensor, feats: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, threshold: float = 0.5, minlenratio: float = 0.0, maxlenratio: float = 10.0, use_att_constraint: bool = False, backward_window: int = 1, forward_window: int = 3, use_teacher_forcing: bool = False) → Dict[str, torch.Tensor][source]

Generate the sequence of features given the sequences of characters.

Parameters:
  • text (LongTensor) – Input sequence of characters (T_text,).

  • feats (Optional[Tensor]) – Feature sequence to extract style (N, idim).

  • spembs (Optional[Tensor]) – Speaker embedding (spk_embed_dim,).

  • sids (Optional[Tensor]) – Speaker ID (1,).

  • lids (Optional[Tensor]) – Language ID (1,).

  • threshold (float) – Threshold in inference.

  • minlenratio (float) – Minimum length ratio in inference.

  • maxlenratio (float) – Maximum length ratio in inference.

  • use_att_constraint (bool) – Whether to apply attention constraint.

  • backward_window (int) – Backward window in attention constraint.

  • forward_window (int) – Forward window in attention constraint.

  • use_teacher_forcing (bool) – Whether to use teacher forcing.

Returns:

Output dict including the following items:
  • feat_gen (Tensor): Output sequence of features (T_feats, odim).

  • prob (Tensor): Output sequence of stop probabilities (T_feats,).

  • att_w (Tensor): Attention weights (T_feats, T).

Return type:

Dict[str, Tensor]
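
A shape-only inference sketch with randomly initialized weights; the hyperparameters are arbitrary assumptions and a trained checkpoint is needed for meaningful output:

import torch
from espnet2.tts.tacotron2.tacotron2 import Tacotron2

model = Tacotron2(idim=50, odim=80)
model.eval()
text = torch.randint(1, 50, (12,))  # (T_text,) character ids
with torch.no_grad():
    output = model.inference(text, threshold=0.5, maxlenratio=10.0)
print(output["feat_gen"].shape)  # (T_feats, 80)
print(output["prob"].shape)      # (T_feats,)
print(output["att_w"].shape)     # (T_feats, T)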

espnet2.tts.feats_extract.log_mel_fbank

class espnet2.tts.feats_extract.log_mel_fbank.LogMelFbank(fs: Union[int, str] = 16000, n_fft: int = 1024, win_length: Optional[int] = None, hop_length: int = 256, window: Optional[str] = 'hann', center: bool = True, normalized: bool = False, onesided: bool = True, n_mels: int = 80, fmin: Optional[int] = 80, fmax: Optional[int] = 7600, htk: bool = False, log_base: Optional[float] = 10.0)[source]

Bases: espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract

Conventional frontend structure for TTS.

Stft -> amplitude-spec -> Log-Mel-Fbank

forward(input: torch.Tensor, input_lengths: torch.Tensor = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_parameters() → Dict[str, Any][source]

Return the parameters required by the vocoder.

output_size() → int[source]
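
A minimal usage sketch; the sampling rate and dummy batch below are arbitrary assumptions:

import torch
from espnet2.tts.feats_extract.log_mel_fbank import LogMelFbank

fbank = LogMelFbank(fs=22050, n_fft=1024, hop_length=256, n_mels=80)
wav = torch.randn(2, 22050)                     # (B, T_wav) dummy waveforms
wav_lengths = torch.tensor([22050, 16000])      # (B,)
feats, feats_lengths = fbank(wav, wav_lengths)  # (B, T_feats, n_mels), (B,)
print(fbank.output_size(), feats.shape, feats_lengths)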

espnet2.tts.feats_extract.dio

F0 extractor using DIO + Stonemask algorithm.

class espnet2.tts.feats_extract.dio.Dio(fs: Union[int, str] = 22050, n_fft: int = 1024, hop_length: int = 256, f0min: int = 80, f0max: int = 400, use_token_averaged_f0: bool = True, use_continuous_f0: bool = True, use_log_f0: bool = True, reduction_factor: int = None)[source]

Bases: espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract

F0 estimation with dio + stonemask algorithm.

This is an F0 extractor based on the DIO + Stonemask algorithm introduced in WORLD: a vocoder-based high-quality speech synthesis system for real-time applications.

Note

This module is based on a NumPy implementation; therefore, the computational graph is not connected.

forward(input: torch.Tensor, input_lengths: torch.Tensor = None, feats_lengths: torch.Tensor = None, durations: torch.Tensor = None, durations_lengths: torch.Tensor = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_parameters() → Dict[str, Any][source]
output_size() → int[source]
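
A usage sketch, assuming the pyworld package is installed; token averaging is disabled here because no durations are passed:

import torch
from espnet2.tts.feats_extract.dio import Dio

dio = Dio(fs=22050, n_fft=1024, hop_length=256, f0min=80, f0max=400,
          use_token_averaged_f0=False)
wav = torch.randn(1, 22050)                      # (B, T_wav) dummy waveform
f0, f0_lengths = dio(wav, torch.tensor([22050]))
print(f0.shape)  # roughly (B, T_frames, 1)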

espnet2.tts.feats_extract.linear_spectrogram

class espnet2.tts.feats_extract.linear_spectrogram.LinearSpectrogram(n_fft: int = 1024, win_length: Optional[int] = None, hop_length: int = 256, window: Optional[str] = 'hann', center: bool = True, normalized: bool = False, onesided: bool = True)[source]

Bases: espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract

Linear amplitude spectrogram.

Stft -> amplitude-spec

forward(input: torch.Tensor, input_lengths: torch.Tensor = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_parameters() → Dict[str, Any][source]

Return the parameters required by the vocoder.

output_size() → int[source]

espnet2.tts.feats_extract.abs_feats_extract

class espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract(*args, **kwargs)[source]

Bases: torch.nn.modules.module.Module, abc.ABC

Initializes internal Module state, shared by both nn.Module and ScriptModule.

abstract forward(input: torch.Tensor, input_lengths: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

abstract get_parameters() → Dict[str, Any][source]
abstract output_size() → int[source]

espnet2.tts.feats_extract.log_spectrogram

class espnet2.tts.feats_extract.log_spectrogram.LogSpectrogram(n_fft: int = 1024, win_length: Optional[int] = None, hop_length: int = 256, window: Optional[str] = 'hann', center: bool = True, normalized: bool = False, onesided: bool = True)[source]

Bases: espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract

Conventional frontend structure for TTS.

Stft -> log-amplitude-spec

forward(input: torch.Tensor, input_lengths: torch.Tensor = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_parameters() → Dict[str, Any][source]

Return the parameters required by the vocoder.

output_size() → int[source]

espnet2.tts.feats_extract.__init__

espnet2.tts.feats_extract.ying

class espnet2.tts.feats_extract.ying.Ying(fs: int = 22050, w_step: int = 256, W: int = 2048, tau_max: int = 2048, midi_start: int = -5, midi_end: int = 75, octave_range: int = 24, use_token_averaged_ying: bool = False)[source]

Bases: espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract

Extract Ying-based features.

crop_scope(x, yin_start, scope_shift)[source]
forward(input: torch.Tensor, input_lengths: Optional[torch.Tensor] = None, feats_lengths: Optional[torch.Tensor] = None, durations: Optional[torch.Tensor] = None, durations_lengths: Optional[torch.Tensor] = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_parameters() → Dict[str, Any][source]
midi_to_lag(m: int, octave_range: float = 12)[source]

Convert MIDI to lag, eq. (4).

Parameters:
  • m – midi

  • fs – sample_rate

  • octave_range

Returns:

Time lag (tau, c(m)) calculated from MIDI, eq. (4).

Return type:

lag

output_size() → int[source]
yingram(x: torch.Tensor)[source]

Calculate yingram from raw audio (multi-segment).

Parameters:
  • x – raw audio, torch.Tensor of shape (t)

  • W – yingram Window Size

  • tau_max

  • fs – sampling rate

  • w_step – yingram bin step size

Returns:

yingram. torch.Tensor of shape (80 x t’)

Return type:

yingram

yingram_from_cmndf(cmndfs: torch.Tensor) → torch.Tensor[source]

Yingram calculator from cMNDFs (cumulative mean normalized difference functions).

Parameters:
  • cmndfs – torch.Tensor of calculated cumulative mean normalized difference functions; for details, see models/yin.py or eq. (1) and (2).

  • ms – list of midi(int)

  • fs – sampling rate

Returns:

calculated batch yingram

Return type:

y

espnet2.tts.feats_extract.yin

espnet2.tts.feats_extract.yin.cumulativeMeanNormalizedDifferenceFunction(df, N, eps=1e-08)[source]

Compute cumulative mean normalized difference function (CMND).

This corresponds to equation (8) in [1]

Parameters:
  • df – Difference function

  • N – length of data

Returns:

cumulative mean normalized difference function

Return type:

list

espnet2.tts.feats_extract.yin.cumulativeMeanNormalizedDifferenceFunctionTorch(dfs: torch.Tensor, N, eps=1e-08) → torch.Tensor[source]
espnet2.tts.feats_extract.yin.differenceFunction(x, N, tau_max)[source]

Compute difference function of data x. This corresponds to equation (6) in [1]

This solution is implemented directly with torch rfft.

Parameters:
  • x – audio data (Tensor)

  • N – length of data

  • tau_max – integration window size

Returns:

difference function

Return type:

list

espnet2.tts.feats_extract.yin.differenceFunctionTorch(xs: torch.Tensor, N, tau_max) → torch.Tensor[source]

PyTorch backend batch-wise differenceFunction.

Has about 1e-4 level error with an input shape of (32, 22050*1.5).

Parameters:
  • xs – Batch of audio data (Tensor).

  • N – Length of data.

  • tau_max – Integration window size.

Returns:

Batch of difference functions.

espnet2.tts.feats_extract.yin.differenceFunction_np(x, N, tau_max)[source]

Compute difference function of data x. This corresponds to equation (6) in [1]

This solution is implemented directly with Numpy fft.

Parameters:
  • x – audio data

  • N – length of data

  • tau_max – integration window size

Returns:

difference function

Return type:

list
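
For reference, equations (6) and (8) of the YIN algorithm admit a short naive NumPy sketch; this is for illustration only, since the functions above use FFT- and torch-based implementations for speed:

import numpy as np

def difference_function_naive(x, tau_max):
    # d(tau) = sum_j (x[j] - x[j + tau])^2, eq. (6)
    N = len(x)
    d = np.zeros(tau_max)
    for tau in range(1, tau_max):
        d[tau] = np.sum((x[: N - tau] - x[tau:N]) ** 2)
    return d

def cmndf_naive(d, eps=1e-8):
    # d'(0) = 1; d'(tau) = d(tau) * tau / sum_{j=1..tau} d(j), eq. (8)
    taus = np.arange(1, len(d))
    cmndf = d[1:] * taus / (np.cumsum(d[1:]) + eps)
    return np.insert(cmndf, 0, 1.0)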

espnet2.tts.feats_extract.energy

Energy extractor.

class espnet2.tts.feats_extract.energy.Energy(fs: Union[int, str] = 22050, n_fft: int = 1024, win_length: Optional[int] = None, hop_length: int = 256, window: str = 'hann', center: bool = True, normalized: bool = False, onesided: bool = True, use_token_averaged_energy: bool = True, reduction_factor: Optional[int] = None)[source]

Bases: espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract

Energy extractor.

forward(input: torch.Tensor, input_lengths: torch.Tensor = None, feats_lengths: torch.Tensor = None, durations: torch.Tensor = None, durations_lengths: torch.Tensor = None) → Tuple[torch.Tensor, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_parameters() → Dict[str, Any][source]
output_size() → int[source]
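
A usage sketch analogous to the other extractors; token averaging is disabled since no durations are given, and the shapes are assumptions:

import torch
from espnet2.tts.feats_extract.energy import Energy

energy_extract = Energy(fs=22050, n_fft=1024, hop_length=256,
                        use_token_averaged_energy=False)
wav = torch.randn(1, 22050)                                    # (B, T_wav)
energy, energy_lengths = energy_extract(wav, torch.tensor([22050]))
print(energy.shape)  # roughly (B, T_frames, 1)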

espnet2.tts.fastspeech2.loss

Fastspeech2 related loss module for ESPnet2.

class espnet2.tts.fastspeech2.loss.FastSpeech2Loss(use_masking: bool = True, use_weighted_masking: bool = False)[source]

Bases: torch.nn.modules.module.Module

Loss function module for FastSpeech2.

Initialize feed-forward Transformer loss module.

Parameters:
  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

forward(after_outs: torch.Tensor, before_outs: torch.Tensor, d_outs: torch.Tensor, p_outs: torch.Tensor, e_outs: torch.Tensor, ys: torch.Tensor, ds: torch.Tensor, ps: torch.Tensor, es: torch.Tensor, ilens: torch.Tensor, olens: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • after_outs (Tensor) – Batch of outputs after postnets (B, T_feats, odim).

  • before_outs (Tensor) – Batch of outputs before postnets (B, T_feats, odim).

  • d_outs (LongTensor) – Batch of outputs of duration predictor (B, T_text).

  • p_outs (Tensor) – Batch of outputs of pitch predictor (B, T_text, 1).

  • e_outs (Tensor) – Batch of outputs of energy predictor (B, T_text, 1).

  • ys (Tensor) – Batch of target features (B, T_feats, odim).

  • ds (LongTensor) – Batch of durations (B, T_text).

  • ps (Tensor) – Batch of target token-averaged pitch (B, T_text, 1).

  • es (Tensor) – Batch of target token-averaged energy (B, T_text, 1).

  • ilens (LongTensor) – Batch of the lengths of each input (B,).

  • olens (LongTensor) – Batch of the lengths of each target (B,).

Returns:

L1 loss value. Tensor: Duration predictor loss value. Tensor: Pitch predictor loss value. Tensor: Energy predictor loss value.

Return type:

Tensor
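
A shape-only sketch with random tensors, just to illustrate the expected argument layout; the values are meaningless and the sizes are arbitrary assumptions:

import torch
from espnet2.tts.fastspeech2.loss import FastSpeech2Loss

criterion = FastSpeech2Loss(use_masking=True)
B, T_text, T_feats, odim = 2, 5, 20, 80
after_outs = torch.randn(B, T_feats, odim)
before_outs = torch.randn(B, T_feats, odim)
d_outs = torch.randn(B, T_text)
p_outs = torch.randn(B, T_text, 1)
e_outs = torch.randn(B, T_text, 1)
ys = torch.randn(B, T_feats, odim)
ds = torch.randint(1, 5, (B, T_text))
ps = torch.randn(B, T_text, 1)
es = torch.randn(B, T_text, 1)
ilens = torch.tensor([T_text, T_text - 1])
olens = torch.tensor([T_feats, T_feats - 4])
l1_loss, duration_loss, pitch_loss, energy_loss = criterion(
    after_outs, before_outs, d_outs, p_outs, e_outs, ys, ds, ps, es, ilens, olens
)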

espnet2.tts.fastspeech2.variance_predictor

Variance predictor related modules.

class espnet2.tts.fastspeech2.variance_predictor.VariancePredictor(idim: int, n_layers: int = 2, n_chans: int = 384, kernel_size: int = 3, bias: bool = True, dropout_rate: float = 0.5)[source]

Bases: torch.nn.modules.module.Module

Variance predictor module.

This is a module of the variance predictor described in FastSpeech 2: Fast and High-Quality End-to-End Text to Speech.

Initialize variance predictor module.

Parameters:
  • idim (int) – Input dimension.

  • n_layers (int) – Number of convolutional layers.

  • n_chans (int) – Number of channels of convolutional layers.

  • kernel_size (int) – Kernel size of convolutional layers.

  • dropout_rate (float) – Dropout rate.

forward(xs: torch.Tensor, x_masks: torch.Tensor = None) → torch.Tensor[source]

Calculate forward propagation.

Parameters:
  • xs (Tensor) – Batch of input sequences (B, Tmax, idim).

  • x_masks (ByteTensor) – Batch of masks indicating padded part (B, Tmax).

Returns:

Batch of predicted sequences (B, Tmax, 1).

Return type:

Tensor
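
A usage sketch; idim below matches the default adim of FastSpeech2, and the batch sizes are arbitrary assumptions:

import torch
from espnet2.tts.fastspeech2.variance_predictor import VariancePredictor

predictor = VariancePredictor(idim=384)
xs = torch.randn(2, 7, 384)  # (B, Tmax, idim) encoder outputs
out = predictor(xs)          # (B, Tmax, 1) one predicted scalar per position
print(out.shape)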

espnet2.tts.fastspeech2.__init__

espnet2.tts.fastspeech2.fastspeech2

Fastspeech2 related modules for ESPnet2.

class espnet2.tts.fastspeech2.fastspeech2.FastSpeech2(idim: int, odim: int, adim: int = 384, aheads: int = 4, elayers: int = 6, eunits: int = 1536, dlayers: int = 6, dunits: int = 1536, postnet_layers: int = 5, postnet_chans: int = 512, postnet_filts: int = 5, postnet_dropout_rate: float = 0.5, positionwise_layer_type: str = 'conv1d', positionwise_conv_kernel_size: int = 1, use_scaled_pos_enc: bool = True, use_batch_norm: bool = True, encoder_normalize_before: bool = True, decoder_normalize_before: bool = True, encoder_concat_after: bool = False, decoder_concat_after: bool = False, reduction_factor: int = 1, encoder_type: str = 'transformer', decoder_type: str = 'transformer', transformer_enc_dropout_rate: float = 0.1, transformer_enc_positional_dropout_rate: float = 0.1, transformer_enc_attn_dropout_rate: float = 0.1, transformer_dec_dropout_rate: float = 0.1, transformer_dec_positional_dropout_rate: float = 0.1, transformer_dec_attn_dropout_rate: float = 0.1, conformer_rel_pos_type: str = 'legacy', conformer_pos_enc_layer_type: str = 'rel_pos', conformer_self_attn_layer_type: str = 'rel_selfattn', conformer_activation_type: str = 'swish', use_macaron_style_in_conformer: bool = True, use_cnn_in_conformer: bool = True, zero_triu: bool = False, conformer_enc_kernel_size: int = 7, conformer_dec_kernel_size: int = 31, duration_predictor_layers: int = 2, duration_predictor_chans: int = 384, duration_predictor_kernel_size: int = 3, duration_predictor_dropout_rate: float = 0.1, energy_predictor_layers: int = 2, energy_predictor_chans: int = 384, energy_predictor_kernel_size: int = 3, energy_predictor_dropout: float = 0.5, energy_embed_kernel_size: int = 9, energy_embed_dropout: float = 0.5, stop_gradient_from_energy_predictor: bool = False, pitch_predictor_layers: int = 2, pitch_predictor_chans: int = 384, pitch_predictor_kernel_size: int = 3, pitch_predictor_dropout: float = 0.5, pitch_embed_kernel_size: int = 9, pitch_embed_dropout: float = 0.5, stop_gradient_from_pitch_predictor: bool = False, spks: Optional[int] = None, langs: Optional[int] = None, spk_embed_dim: Optional[int] = None, spk_embed_integration_type: str = 'add', use_gst: bool = False, gst_tokens: int = 10, gst_heads: int = 4, gst_conv_layers: int = 6, gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), gst_conv_kernel_size: int = 3, gst_conv_stride: int = 2, gst_gru_layers: int = 1, gst_gru_units: int = 128, init_type: str = 'xavier_uniform', init_enc_alpha: float = 1.0, init_dec_alpha: float = 1.0, use_masking: bool = False, use_weighted_masking: bool = False)[source]

Bases: espnet2.tts.abs_tts.AbsTTS

FastSpeech2 module.

This is a module of FastSpeech2 described in FastSpeech 2: Fast and High-Quality End-to-End Text to Speech. Instead of quantized pitch and energy, we use token-averaged value introduced in FastPitch: Parallel Text-to-speech with Pitch Prediction.

Initialize FastSpeech2 module.

Parameters:
  • idim (int) – Dimension of the inputs.

  • odim (int) – Dimension of the outputs.

  • elayers (int) – Number of encoder layers.

  • eunits (int) – Number of encoder hidden units.

  • dlayers (int) – Number of decoder layers.

  • dunits (int) – Number of decoder hidden units.

  • postnet_layers (int) – Number of postnet layers.

  • postnet_chans (int) – Number of postnet channels.

  • postnet_filts (int) – Kernel size of postnet.

  • postnet_dropout_rate (float) – Dropout rate in postnet.

  • use_scaled_pos_enc (bool) – Whether to use trainable scaled pos encoding.

  • use_batch_norm (bool) – Whether to use batch normalization in encoder prenet.

  • encoder_normalize_before (bool) – Whether to apply layernorm layer before encoder block.

  • decoder_normalize_before (bool) – Whether to apply layernorm layer before decoder block.

  • encoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in encoder.

  • decoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in decoder.

  • reduction_factor (int) – Reduction factor.

  • encoder_type (str) – Encoder type (“transformer” or “conformer”).

  • decoder_type (str) – Decoder type (“transformer” or “conformer”).

  • transformer_enc_dropout_rate (float) – Dropout rate in encoder except attention and positional encoding.

  • transformer_enc_positional_dropout_rate (float) – Dropout rate after encoder positional encoding.

  • transformer_enc_attn_dropout_rate (float) – Dropout rate in encoder self-attention module.

  • transformer_dec_dropout_rate (float) – Dropout rate in decoder except attention & positional encoding.

  • transformer_dec_positional_dropout_rate (float) – Dropout rate after decoder positional encoding.

  • transformer_dec_attn_dropout_rate (float) – Dropout rate in decoder self-attention module.

  • conformer_rel_pos_type (str) – Relative pos encoding type in conformer.

  • conformer_pos_enc_layer_type (str) – Pos encoding layer type in conformer.

  • conformer_self_attn_layer_type (str) – Self-attention layer type in conformer.

  • conformer_activation_type (str) – Activation function type in conformer.

  • use_macaron_style_in_conformer – Whether to use macaron style FFN.

  • use_cnn_in_conformer – Whether to use CNN in conformer.

  • zero_triu – Whether to use zero triu in relative self-attention module.

  • conformer_enc_kernel_size – Kernel size of encoder conformer.

  • conformer_dec_kernel_size – Kernel size of decoder conformer.

  • duration_predictor_layers (int) – Number of duration predictor layers.

  • duration_predictor_chans (int) – Number of duration predictor channels.

  • duration_predictor_kernel_size (int) – Kernel size of duration predictor.

  • duration_predictor_dropout_rate (float) – Dropout rate in duration predictor.

  • pitch_predictor_layers (int) – Number of pitch predictor layers.

  • pitch_predictor_chans (int) – Number of pitch predictor channels.

  • pitch_predictor_kernel_size (int) – Kernel size of pitch predictor.

  • pitch_predictor_dropout (float) – Dropout rate in pitch predictor.

  • pitch_embed_kernel_size (int) – Kernel size of pitch embedding.

  • pitch_embed_dropout (float) – Dropout rate for pitch embedding.

  • stop_gradient_from_pitch_predictor – Whether to stop gradient from pitch predictor to encoder.

  • energy_predictor_layers (int) – Number of energy predictor layers.

  • energy_predictor_chans (int) – Number of energy predictor channels.

  • energy_predictor_kernel_size (int) – Kernel size of energy predictor.

  • energy_predictor_dropout (float) – Dropout rate in energy predictor.

  • energy_embed_kernel_size (int) – Kernel size of energy embedding.

  • energy_embed_dropout (float) – Dropout rate for energy embedding.

  • stop_gradient_from_energy_predictor – Whether to stop gradient from energy predictor to encoder.

  • spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.

  • langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.

  • spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.

  • spk_embed_integration_type – How to integrate speaker embedding.

  • use_gst (bool) – Whether to use global style token.

  • gst_tokens (int) – The number of GST embeddings.

  • gst_heads (int) – The number of heads in GST multihead attention.

  • gst_conv_layers (int) – The number of conv layers in GST.

  • gst_conv_chans_list (Sequence[int]) – List of the number of channels of conv layers in GST.

  • gst_conv_kernel_size (int) – Kernel size of conv layers in GST.

  • gst_conv_stride (int) – Stride size of conv layers in GST.

  • gst_gru_layers (int) – The number of GRU layers in GST.

  • gst_gru_units (int) – The number of GRU units in GST.

  • init_type (str) – How to initialize transformer parameters.

  • init_enc_alpha (float) – Initial value of alpha in scaled pos encoding of the encoder.

  • init_dec_alpha (float) – Initial value of alpha in scaled pos encoding of the decoder.

  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

forward(text: torch.Tensor, text_lengths: torch.Tensor, feats: torch.Tensor, feats_lengths: torch.Tensor, durations: torch.Tensor, durations_lengths: torch.Tensor, pitch: torch.Tensor, pitch_lengths: torch.Tensor, energy: torch.Tensor, energy_lengths: torch.Tensor, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, joint_training: bool = False) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • text (LongTensor) – Batch of padded token ids (B, T_text).

  • text_lengths (LongTensor) – Batch of lengths of each input (B,).

  • feats (Tensor) – Batch of padded target features (B, T_feats, odim).

  • feats_lengths (LongTensor) – Batch of the lengths of each target (B,).

  • durations (LongTensor) – Batch of padded durations (B, T_text + 1).

  • durations_lengths (LongTensor) – Batch of duration lengths (B, T_text + 1).

  • pitch (Tensor) – Batch of padded token-averaged pitch (B, T_text + 1, 1).

  • pitch_lengths (LongTensor) – Batch of pitch lengths (B, T_text + 1).

  • energy (Tensor) – Batch of padded token-averaged energy (B, T_text + 1, 1).

  • energy_lengths (LongTensor) – Batch of energy lengths (B, T_text + 1).

  • spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).

  • sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).

  • lids (Optional[Tensor]) – Batch of language IDs (B, 1).

  • joint_training (bool) – Whether to perform joint training with vocoder.

Returns:

Loss scalar value. Dict: Statistics to be monitored. Tensor: Weight value if not joint training else model outputs.

Return type:

Tensor

inference(text: torch.Tensor, feats: Optional[torch.Tensor] = None, durations: Optional[torch.Tensor] = None, spembs: torch.Tensor = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, alpha: float = 1.0, use_teacher_forcing: bool = False) → Dict[str, torch.Tensor][source]

Generate the sequence of features given the sequences of characters.

Parameters:
  • text (LongTensor) – Input sequence of characters (T_text,).

  • feats (Optional[Tensor]) – Feature sequence to extract style (N, idim).

  • durations (Optional[Tensor]) – Groundtruth of duration (T_text + 1,).

  • spembs (Optional[Tensor]) – Speaker embedding vector (spk_embed_dim,).

  • sids (Optional[Tensor]) – Speaker ID (1,).

  • lids (Optional[Tensor]) – Language ID (1,).

  • pitch (Optional[Tensor]) – Groundtruth of token-avg pitch (T_text + 1, 1).

  • energy (Optional[Tensor]) – Groundtruth of token-avg energy (T_text + 1, 1).

  • alpha (float) – Alpha to control the speed.

  • use_teacher_forcing (bool) – Whether to use teacher forcing. If true, groundtruth of duration, pitch and energy will be used.

Returns:

Output dict including the following items:
  • feat_gen (Tensor): Output sequence of features (T_feats, odim).

  • duration (Tensor): Duration sequence (T_text + 1,).

  • pitch (Tensor): Pitch sequence (T_text + 1,).

  • energy (Tensor): Energy sequence (T_text + 1,).

Return type:

Dict[str, Tensor]
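
A shape-only inference sketch with randomly initialized weights; idim and odim are arbitrary assumptions, and a trained checkpoint is needed for meaningful output:

import torch
from espnet2.tts.fastspeech2.fastspeech2 import FastSpeech2

model = FastSpeech2(idim=50, odim=80)
model.eval()
text = torch.randint(1, 50, (12,))  # (T_text,) token ids
with torch.no_grad():
    output = model.inference(text, alpha=1.0)
for key in ("feat_gen", "duration", "pitch", "energy"):
    print(key, output[key].shape)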

espnet2.tts.transformer.transformer

Transformer-TTS related modules.

class espnet2.tts.transformer.transformer.Transformer(idim: int, odim: int, embed_dim: int = 512, eprenet_conv_layers: int = 3, eprenet_conv_chans: int = 256, eprenet_conv_filts: int = 5, dprenet_layers: int = 2, dprenet_units: int = 256, elayers: int = 6, eunits: int = 1024, adim: int = 512, aheads: int = 4, dlayers: int = 6, dunits: int = 1024, postnet_layers: int = 5, postnet_chans: int = 256, postnet_filts: int = 5, positionwise_layer_type: str = 'conv1d', positionwise_conv_kernel_size: int = 1, use_scaled_pos_enc: bool = True, use_batch_norm: bool = True, encoder_normalize_before: bool = True, decoder_normalize_before: bool = True, encoder_concat_after: bool = False, decoder_concat_after: bool = False, reduction_factor: int = 1, spks: Optional[int] = None, langs: Optional[int] = None, spk_embed_dim: Optional[int] = None, spk_embed_integration_type: str = 'add', use_gst: bool = False, gst_tokens: int = 10, gst_heads: int = 4, gst_conv_layers: int = 6, gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), gst_conv_kernel_size: int = 3, gst_conv_stride: int = 2, gst_gru_layers: int = 1, gst_gru_units: int = 128, transformer_enc_dropout_rate: float = 0.1, transformer_enc_positional_dropout_rate: float = 0.1, transformer_enc_attn_dropout_rate: float = 0.1, transformer_dec_dropout_rate: float = 0.1, transformer_dec_positional_dropout_rate: float = 0.1, transformer_dec_attn_dropout_rate: float = 0.1, transformer_enc_dec_attn_dropout_rate: float = 0.1, eprenet_dropout_rate: float = 0.5, dprenet_dropout_rate: float = 0.5, postnet_dropout_rate: float = 0.5, init_type: str = 'xavier_uniform', init_enc_alpha: float = 1.0, init_dec_alpha: float = 1.0, use_masking: bool = False, use_weighted_masking: bool = False, bce_pos_weight: float = 5.0, loss_type: str = 'L1', use_guided_attn_loss: bool = True, num_heads_applied_guided_attn: int = 2, num_layers_applied_guided_attn: int = 2, modules_applied_guided_attn: Sequence[str] = 'encoder-decoder', guided_attn_loss_sigma: float = 0.4, guided_attn_loss_lambda: float = 1.0)[source]

Bases: espnet2.tts.abs_tts.AbsTTS

Transformer-TTS module.

This is a module of text-to-speech Transformer described in Neural Speech Synthesis with Transformer Network, which converts the sequence of tokens into the sequence of Mel-filterbanks.

Initialize Transformer module.

Parameters:
  • idim (int) – Dimension of the inputs.

  • odim (int) – Dimension of the outputs.

  • embed_dim (int) – Dimension of character embedding.

  • eprenet_conv_layers (int) – Number of encoder prenet convolution layers.

  • eprenet_conv_chans (int) – Number of encoder prenet convolution channels.

  • eprenet_conv_filts (int) – Filter size of encoder prenet convolution.

  • dprenet_layers (int) – Number of decoder prenet layers.

  • dprenet_units (int) – Number of decoder prenet hidden units.

  • elayers (int) – Number of encoder layers.

  • eunits (int) – Number of encoder hidden units.

  • adim (int) – Number of attention transformation dimensions.

  • aheads (int) – Number of heads for multi head attention.

  • dlayers (int) – Number of decoder layers.

  • dunits (int) – Number of decoder hidden units.

  • postnet_layers (int) – Number of postnet layers.

  • postnet_chans (int) – Number of postnet channels.

  • postnet_filts (int) – Filter size of postnet.

  • use_scaled_pos_enc (bool) – Whether to use trainable scaled pos encoding.

  • use_batch_norm (bool) – Whether to use batch normalization in encoder prenet.

  • encoder_normalize_before (bool) – Whether to apply layernorm layer before encoder block.

  • decoder_normalize_before (bool) – Whether to apply layernorm layer before decoder block.

  • encoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in encoder.

  • decoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in decoder.

  • positionwise_layer_type (str) – Position-wise operation type.

  • positionwise_conv_kernel_size (int) – Kernel size in position wise conv 1d.

  • reduction_factor (int) – Reduction factor.

  • spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.

  • langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.

  • spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.

  • spk_embed_integration_type (str) – How to integrate speaker embedding.

  • use_gst (bool) – Whether to use global style token.

  • gst_tokens (int) – Number of GST embeddings.

  • gst_heads (int) – Number of heads in GST multihead attention.

  • gst_conv_layers (int) – Number of conv layers in GST.

  • gst_conv_chans_list (Sequence[int]) – List of the number of channels of conv layers in GST.

  • gst_conv_kernel_size (int) – Kernel size of conv layers in GST.

  • gst_conv_stride (int) – Stride size of conv layers in GST.

  • gst_gru_layers (int) – Number of GRU layers in GST.

  • gst_gru_units (int) – Number of GRU units in GST.

  • transformer_lr (float) – Initial value of learning rate.

  • transformer_warmup_steps (int) – Optimizer warmup steps.

  • transformer_enc_dropout_rate (float) – Dropout rate in encoder except attention and positional encoding.

  • transformer_enc_positional_dropout_rate (float) – Dropout rate after encoder positional encoding.

  • transformer_enc_attn_dropout_rate (float) – Dropout rate in encoder self-attention module.

  • transformer_dec_dropout_rate (float) – Dropout rate in decoder except attention & positional encoding.

  • transformer_dec_positional_dropout_rate (float) – Dropout rate after decoder positional encoding.

  • transformer_dec_attn_dropout_rate (float) – Dropout rate in decoder self-attention module.

  • transformer_enc_dec_attn_dropout_rate (float) – Dropout rate in source attention module.

  • init_type (str) – How to initialize transformer parameters.

  • init_enc_alpha (float) – Initial value of alpha in scaled pos encoding of the encoder.

  • init_dec_alpha (float) – Initial value of alpha in scaled pos encoding of the decoder.

  • eprenet_dropout_rate (float) – Dropout rate in encoder prenet.

  • dprenet_dropout_rate (float) – Dropout rate in decoder prenet.

  • postnet_dropout_rate (float) – Dropout rate in postnet.

  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

  • bce_pos_weight (float) – Positive sample weight in bce calculation (only for use_masking=true).

  • loss_type (str) – How to calculate loss.

  • use_guided_attn_loss (bool) – Whether to use guided attention loss.

  • num_heads_applied_guided_attn (int) – Number of heads in each layer to apply guided attention loss.

  • num_layers_applied_guided_attn (int) – Number of layers to apply guided attention loss.

  • modules_applied_guided_attn (Sequence[str]) – List of module names to apply guided attention loss.

  • guided_attn_loss_sigma (float) – Sigma in guided attention loss.

  • guided_attn_loss_lambda (float) – Lambda in guided attention loss.

forward(text: torch.Tensor, text_lengths: torch.Tensor, feats: torch.Tensor, feats_lengths: torch.Tensor, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, joint_training: bool = False) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • text (LongTensor) – Batch of padded character ids (B, Tmax).

  • text_lengths (LongTensor) – Batch of lengths of each input batch (B,).

  • feats (Tensor) – Batch of padded target features (B, Lmax, odim).

  • feats_lengths (LongTensor) – Batch of the lengths of each target (B,).

  • spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).

  • sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).

  • lids (Optional[Tensor]) – Batch of language IDs (B, 1).

  • joint_training (bool) – Whether to perform joint training with vocoder.

Returns:

Loss scalar value. Dict: Statistics to be monitored. Tensor: Weight value if not joint training else model outputs.

Return type:

Tensor

inference(text: torch.Tensor, feats: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, threshold: float = 0.5, minlenratio: float = 0.0, maxlenratio: float = 10.0, use_teacher_forcing: bool = False) → Dict[str, torch.Tensor][source]

Generate the sequence of features given the sequences of characters.

Parameters:
  • text (LongTensor) – Input sequence of characters (T_text,).

  • feats (Optional[Tensor]) – Feature sequence to extract style embedding (T_feats’, idim).

  • spembs (Optional[Tensor]) – Speaker embedding (spk_embed_dim,).

  • sids (Optional[Tensor]) – Speaker ID (1,).

  • lids (Optional[Tensor]) – Language ID (1,).

  • threshold (float) – Threshold in inference.

  • minlenratio (float) – Minimum length ratio in inference.

  • maxlenratio (float) – Maximum length ratio in inference.

  • use_teacher_forcing (bool) – Whether to use teacher forcing.

Returns:

Output dict including the following items:
  • feat_gen (Tensor): Output sequence of features (T_feats, odim).

  • prob (Tensor): Output sequence of stop probabilities (T_feats,).

  • att_w (Tensor): Source attn weight (#layers, #heads, T_feats, T_text).

Return type:

Dict[str, Tensor]
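
A shape-only inference sketch with randomly initialized weights; the hyperparameters are arbitrary assumptions, and a trained checkpoint is needed for meaningful output:

import torch
from espnet2.tts.transformer.transformer import Transformer

model = Transformer(idim=50, odim=80)
model.eval()
text = torch.randint(1, 50, (12,))  # (T_text,) token ids
with torch.no_grad():
    output = model.inference(text, threshold=0.5, maxlenratio=5.0)
print(output["feat_gen"].shape)  # (T_feats, 80)
print(output["prob"].shape)      # (T_feats,)
print(output["att_w"].shape)     # (#layers, #heads, T_feats, T_text)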

espnet2.tts.transformer.__init__

espnet2.tts.utils.parallel_wavegan_pretrained_vocoder

Wrapper class for the vocoder model trained with parallel_wavegan repo.

class espnet2.tts.utils.parallel_wavegan_pretrained_vocoder.ParallelWaveGANPretrainedVocoder(model_file: Union[pathlib.Path, str], config_file: Union[pathlib.Path, str, None] = None)[source]

Bases: torch.nn.modules.module.Module

Wrapper class to load the vocoder trained with parallel_wavegan repo.

Initialize ParallelWaveGANPretrainedVocoder module.

forward(feats: torch.Tensor) → torch.Tensor[source]

Generate waveform with pretrained vocoder.

Parameters:

feats (Tensor) – Feature tensor (T_feats, #mels).

Returns:

Generated waveform tensor (T_wav).

Return type:

Tensor
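
Example (a minimal sketch; the checkpoint and config paths are placeholders for a pair produced by the parallel_wavegan repo, which must be installed):

    import torch
    from espnet2.tts.utils import ParallelWaveGANPretrainedVocoder

    vocoder = ParallelWaveGANPretrainedVocoder(
        model_file="checkpoint-400000steps.pkl",  # placeholder path
        config_file="config.yml",                 # placeholder path
    )
    feats = torch.randn(120, 80)  # (T_feats, #mels) mel-spectrogram
    with torch.no_grad():
        wav = vocoder(feats)      # (T_wav,) generated waveform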

espnet2.tts.utils.__init__

class espnet2.tts.utils.__init__.DurationCalculator(*args, **kwargs)[source]

Bases: torch.nn.modules.module.Module

Duration calculator module.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(att_ws: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor][source]

Convert attention weight to durations.

Parameters:

att_ws (Tensor) – Attention weight tensor (T_feats, T_text) or (#layers, #heads, T_feats, T_text).

Returns:

Duration of each input (T_text,). Tensor: Focus rate value.

Return type:

LongTensor

class espnet2.tts.utils.__init__.ParallelWaveGANPretrainedVocoder(model_file: Union[pathlib.Path, str], config_file: Union[pathlib.Path, str, None] = None)[source]

Bases: torch.nn.modules.module.Module

Wrapper class to load the vocoder trained with parallel_wavegan repo.

Initialize ParallelWaveGANPretrainedVocoder module.

forward(feats: torch.Tensor) → torch.Tensor[source]

Generate waveform with pretrained vocoder.

Parameters:

feats (Tensor) – Feature tensor (T_feats, #mels).

Returns:

Generated waveform tensor (T_wav).

Return type:

Tensor

espnet2.tts.utils.duration_calculator

Duration calculator for ESPnet2.

class espnet2.tts.utils.duration_calculator.DurationCalculator(*args, **kwargs)[source]

Bases: torch.nn.modules.module.Module

Duration calculator module.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(att_ws: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor][source]

Convert attention weight to durations.

Parameters:

att_ws (Tensor) – Attention weight tensor (T_feats, T_text) or (#layers, #heads, T_feats, T_text).

Returns:

Duration of each input (T_text,). Tensor: Focus rate value.

Return type:

LongTensor
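
Example (a minimal sketch with random single-head attention weights; real weights would come from a trained autoregressive model run with teacher forcing):

    import torch
    from espnet2.tts.utils.duration_calculator import DurationCalculator

    calculator = DurationCalculator()
    # (T_feats, T_text) attention weights; a multi-head input of shape
    # (#layers, #heads, T_feats, T_text) is also accepted
    att_ws = torch.softmax(torch.randn(120, 20), dim=-1)
    durations, focus_rate = calculator(att_ws)
    print(durations.shape)    # (T_text,) == torch.Size([20])
    print(float(focus_rate))  # scalar focus rate value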

espnet2.tts.gst.style_encoder

Style encoder of GST-Tacotron.

class espnet2.tts.gst.style_encoder.MultiHeadedAttention(q_dim, k_dim, v_dim, n_head, n_feat, dropout_rate=0.0)[source]

Bases: espnet.nets.pytorch_backend.transformer.attention.MultiHeadedAttention

Multi head attention module with different input dimension.

Initialize multi head attention module.

class espnet2.tts.gst.style_encoder.ReferenceEncoder(idim=80, conv_layers: int = 6, conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), conv_kernel_size: int = 3, conv_stride: int = 2, gru_layers: int = 1, gru_units: int = 128)[source]

Bases: torch.nn.modules.module.Module

Reference encoder module.

This module is the reference encoder introduced in Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis.

Parameters:
  • idim (int, optional) – Dimension of the input mel-spectrogram.

  • conv_layers (int, optional) – The number of conv layers in the reference encoder.

  • conv_chans_list (Sequence[int], optional) – List of the number of channels of conv layers in the reference encoder.

  • conv_kernel_size (int, optional) – Kernel size of conv layers in the reference encoder.

  • conv_stride (int, optional) – Stride size of conv layers in the reference encoder.

  • gru_layers (int, optional) – The number of GRU layers in the reference encoder.

  • gru_units (int, optional) – The number of GRU units in the reference encoder.

Initialize reference encoder module.

forward(speech: torch.Tensor) → torch.Tensor[source]

Calculate forward propagation.

Parameters:

speech (Tensor) – Batch of padded target features (B, Lmax, idim).

Returns:

Reference embedding (B, gru_units).

Return type:

Tensor

class espnet2.tts.gst.style_encoder.StyleEncoder(idim: int = 80, gst_tokens: int = 10, gst_token_dim: int = 256, gst_heads: int = 4, conv_layers: int = 6, conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), conv_kernel_size: int = 3, conv_stride: int = 2, gru_layers: int = 1, gru_units: int = 128)[source]

Bases: torch.nn.modules.module.Module

Style encoder.

This module is the style encoder introduced in Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis.

Parameters:
  • idim (int, optional) – Dimension of the input mel-spectrogram.

  • gst_tokens (int, optional) – The number of GST embeddings.

  • gst_token_dim (int, optional) – Dimension of each GST embedding.

  • gst_heads (int, optional) – The number of heads in GST multihead attention.

  • conv_layers (int, optional) – The number of conv layers in the reference encoder.

  • conv_chans_list (Sequence[int], optional) – List of the number of channels of conv layers in the reference encoder.

  • conv_kernel_size (int, optional) – Kernel size of conv layers in the reference encoder.

  • conv_stride (int, optional) – Stride size of conv layers in the reference encoder.

  • gru_layers (int, optional) – The number of GRU layers in the reference encoder.

  • gru_units (int, optional) – The number of GRU units in the reference encoder.

Initialize global style encoder module.

forward(speech: torch.Tensor) → torch.Tensor[source]

Calculate forward propagation.

Parameters:

speech (Tensor) – Batch of padded target features (B, Lmax, odim).

Returns:

Style token embeddings (B, token_dim).

Return type:

Tensor
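
Example (a minimal sketch using the default GST settings; the input is a random batch standing in for padded mel features):

    import torch
    from espnet2.tts.gst.style_encoder import StyleEncoder

    style_encoder = StyleEncoder(idim=80, gst_tokens=10, gst_token_dim=256, gst_heads=4)
    speech = torch.randn(4, 300, 80)    # (B, Lmax, idim) padded mel features
    style_embs = style_encoder(speech)  # (B, gst_token_dim) == (4, 256)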

class espnet2.tts.gst.style_encoder.StyleTokenLayer(ref_embed_dim: int = 128, gst_tokens: int = 10, gst_token_dim: int = 256, gst_heads: int = 4, dropout_rate: float = 0.0)[source]

Bases: torch.nn.modules.module.Module

Style token layer module.

This module is the style token layer introduced in Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis.

Parameters:
  • ref_embed_dim (int, optional) – Dimension of the input reference embedding.

  • gst_tokens (int, optional) – The number of GST embeddings.

  • gst_token_dim (int, optional) – Dimension of each GST embedding.

  • gst_heads (int, optional) – The number of heads in GST multihead attention.

  • dropout_rate (float, optional) – Dropout rate in multi-head attention.

Initialize style token layer module.

forward(ref_embs: torch.Tensor) → torch.Tensor[source]

Calculate forward propagation.

Parameters:

ref_embs (Tensor) – Reference embeddings (B, ref_embed_dim).

Returns:

Style token embeddings (B, gst_token_dim).

Return type:

Tensor
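
Example (a minimal sketch chaining ReferenceEncoder and StyleTokenLayer by hand, which is the composition StyleEncoder appears to wrap; the defaults are assumed compatible, i.e. gru_units == ref_embed_dim == 128):

    import torch
    from espnet2.tts.gst.style_encoder import ReferenceEncoder, StyleTokenLayer

    ref_enc = ReferenceEncoder(idim=80)  # emits (B, gru_units) == (B, 128)
    stl = StyleTokenLayer()              # expects ref_embed_dim == 128
    speech = torch.randn(4, 300, 80)     # (B, Lmax, idim) padded mel features
    ref_embs = ref_enc(speech)           # (B, 128) reference embeddings
    style_embs = stl(ref_embs)           # (B, gst_token_dim) == (4, 256)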

espnet2.tts.gst.__init__

espnet2.tts.prodiff.prodiff

ProDiff related modules for ESPnet2.

class espnet2.tts.prodiff.prodiff.ProDiff(idim: int, odim: int, adim: int = 384, aheads: int = 4, elayers: int = 6, eunits: int = 1536, postnet_layers: int = 0, postnet_chans: int = 512, postnet_filts: int = 5, postnet_dropout_rate: float = 0.5, positionwise_layer_type: str = 'conv1d', positionwise_conv_kernel_size: int = 1, use_scaled_pos_enc: bool = True, use_batch_norm: bool = True, encoder_normalize_before: bool = True, encoder_concat_after: bool = False, reduction_factor: int = 1, encoder_type: str = 'transformer', decoder_type: str = 'diffusion', transformer_enc_dropout_rate: float = 0.1, transformer_enc_positional_dropout_rate: float = 0.1, transformer_enc_attn_dropout_rate: float = 0.1, denoiser_layers: int = 20, denoiser_channels: int = 256, diffusion_steps: int = 1000, diffusion_timescale: int = 1, diffusion_beta: float = 40.0, diffusion_scheduler: str = 'vpsde', diffusion_cycle_ln: int = 1, conformer_rel_pos_type: str = 'legacy', conformer_pos_enc_layer_type: str = 'rel_pos', conformer_self_attn_layer_type: str = 'rel_selfattn', conformer_activation_type: str = 'swish', use_macaron_style_in_conformer: bool = True, use_cnn_in_conformer: bool = True, zero_triu: bool = False, conformer_enc_kernel_size: int = 7, duration_predictor_layers: int = 2, duration_predictor_chans: int = 384, duration_predictor_kernel_size: int = 3, duration_predictor_dropout_rate: float = 0.1, energy_predictor_layers: int = 2, energy_predictor_chans: int = 384, energy_predictor_kernel_size: int = 3, energy_predictor_dropout: float = 0.5, energy_embed_kernel_size: int = 9, energy_embed_dropout: float = 0.5, stop_gradient_from_energy_predictor: bool = False, pitch_predictor_layers: int = 2, pitch_predictor_chans: int = 384, pitch_predictor_kernel_size: int = 3, pitch_predictor_dropout: float = 0.5, pitch_embed_kernel_size: int = 9, pitch_embed_dropout: float = 0.5, stop_gradient_from_pitch_predictor: bool = False, spks: Optional[int] = None, langs: Optional[int] = None, spk_embed_dim: Optional[int] = None, spk_embed_integration_type: str = 'add', use_gst: bool = False, gst_tokens: int = 10, gst_heads: int = 4, gst_conv_layers: int = 6, gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), gst_conv_kernel_size: int = 3, gst_conv_stride: int = 2, gst_gru_layers: int = 1, gst_gru_units: int = 128, init_type: str = 'xavier_uniform', init_enc_alpha: float = 1.0, init_dec_alpha: float = 1.0, use_masking: bool = False, use_weighted_masking: bool = False)[source]

Bases: espnet2.tts.abs_tts.AbsTTS

ProDiff module.

This is a module of ProDiff described in ProDiff: Progressive Fast Diffusion Model for High-Quality Text-to-Speech.

Initialize ProDiff module.

Parameters:
  • idim (int) – Dimension of the inputs.

  • odim (int) – Dimension of the outputs.

  • elayers (int) – Number of encoder layers.

  • eunits (int) – Number of encoder hidden units.

  • dlayers (int) – Number of decoder layers.

  • dunits (int) – Number of decoder hidden units.

  • postnet_layers (int) – Number of postnet layers.

  • postnet_chans (int) – Number of postnet channels.

  • postnet_filts (int) – Kernel size of postnet.

  • postnet_dropout_rate (float) – Dropout rate in postnet.

  • use_scaled_pos_enc (bool) – Whether to use trainable scaled pos encoding.

  • use_batch_norm (bool) – Whether to use batch normalization in encoder prenet.

  • encoder_normalize_before (bool) – Whether to apply layernorm layer before encoder block.

  • decoder_normalize_before (bool) – Whether to apply layernorm layer before decoder block.

  • encoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in encoder.

  • decoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in decoder.

  • reduction_factor (int) – Reduction factor.

  • encoder_type (str) – Encoder type (“transformer” or “conformer”).

  • decoder_type (str) – Decoder type (“transformer” or “conformer”).

  • transformer_enc_dropout_rate (float) – Dropout rate in encoder except attention and positional encoding.

  • transformer_enc_positional_dropout_rate (float) – Dropout rate after encoder positional encoding.

  • transformer_enc_attn_dropout_rate (float) – Dropout rate in encoder self-attention module.

  • transformer_dec_dropout_rate (float) – Dropout rate in decoder except attention & positional encoding.

  • transformer_dec_positional_dropout_rate (float) – Dropout rate after decoder positional encoding.

  • transformer_dec_attn_dropout_rate (float) – Dropout rate in decoder self-attention module.

  • conformer_rel_pos_type (str) – Relative pos encoding type in conformer.

  • conformer_pos_enc_layer_type (str) – Pos encoding layer type in conformer.

  • conformer_self_attn_layer_type (str) – Self-attention layer type in conformer.

  • conformer_activation_type (str) – Activation function type in conformer.

  • use_macaron_style_in_conformer – Whether to use macaron style FFN.

  • use_cnn_in_conformer – Whether to use CNN in conformer.

  • zero_triu – Whether to use zero triu in relative self-attention module.

  • conformer_enc_kernel_size – Kernel size of encoder conformer.

  • conformer_dec_kernel_size – Kernel size of decoder conformer.

  • duration_predictor_layers (int) – Number of duration predictor layers.

  • duration_predictor_chans (int) – Number of duration predictor channels.

  • duration_predictor_kernel_size (int) – Kernel size of duration predictor.

  • duration_predictor_dropout_rate (float) – Dropout rate in duration predictor.

  • pitch_predictor_layers (int) – Number of pitch predictor layers.

  • pitch_predictor_chans (int) – Number of pitch predictor channels.

  • pitch_predictor_kernel_size (int) – Kernel size of pitch predictor.

  • pitch_predictor_dropout_rate (float) – Dropout rate in pitch predictor.

  • pitch_embed_kernel_size (float) – Kernel size of pitch embedding.

  • pitch_embed_dropout_rate (float) – Dropout rate for pitch embedding.

  • stop_gradient_from_pitch_predictor – Whether to stop gradient from pitch predictor to encoder.

  • energy_predictor_layers (int) – Number of energy predictor layers.

  • energy_predictor_chans (int) – Number of energy predictor channels.

  • energy_predictor_kernel_size (int) – Kernel size of energy predictor.

  • energy_predictor_dropout_rate (float) – Dropout rate in energy predictor.

  • energy_embed_kernel_size (float) – Kernel size of energy embedding.

  • energy_embed_dropout_rate (float) – Dropout rate for energy embedding.

  • stop_gradient_from_energy_predictor – Whether to stop gradient from energy predictor to encoder.

  • spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.

  • langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use the lid embedding layer.

  • spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.

  • spk_embed_integration_type – How to integrate speaker embedding.

  • use_gst (bool) – Whether to use global style token.

  • gst_tokens (int) – The number of GST embeddings.

  • gst_heads (int) – The number of heads in GST multihead attention.

  • gst_conv_layers (int) – The number of conv layers in GST.

  • gst_conv_chans_list (Sequence[int]) – List of the number of channels of conv layers in GST.

  • gst_conv_kernel_size (int) – Kernel size of conv layers in GST.

  • gst_conv_stride (int) – Stride size of conv layers in GST.

  • gst_gru_layers (int) – The number of GRU layers in GST.

  • gst_gru_units (int) – The number of GRU units in GST.

  • init_type (str) – How to initialize transformer parameters.

  • init_enc_alpha (float) – Initial value of alpha in scaled pos encoding of the encoder.

  • init_dec_alpha (float) – Initial value of alpha in scaled pos encoding of the decoder.

  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

forward(text: torch.Tensor, text_lengths: torch.Tensor, feats: torch.Tensor, feats_lengths: torch.Tensor, durations: torch.Tensor, durations_lengths: torch.Tensor, pitch: torch.Tensor, pitch_lengths: torch.Tensor, energy: torch.Tensor, energy_lengths: torch.Tensor, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, joint_training: bool = False) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • text (LongTensor) – Batch of padded token ids (B, T_text).

  • text_lengths (LongTensor) – Batch of lengths of each input (B,).

  • feats (Tensor) – Batch of padded target features (B, T_feats, odim).

  • feats_lengths (LongTensor) – Batch of the lengths of each target (B,).

  • durations (LongTensor) – Batch of padded durations (B, T_text + 1).

  • durations_lengths (LongTensor) – Batch of duration lengths (B, T_text + 1).

  • pitch (Tensor) – Batch of padded token-averaged pitch (B, T_text + 1, 1).

  • pitch_lengths (LongTensor) – Batch of pitch lengths (B, T_text + 1).

  • energy (Tensor) – Batch of padded token-averaged energy (B, T_text + 1, 1).

  • energy_lengths (LongTensor) – Batch of energy lengths (B, T_text + 1).

  • spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).

  • sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).

  • lids (Optional[Tensor]) – Batch of language IDs (B, 1).

  • joint_training (bool) – Whether to perform joint training with vocoder.

Returns:

Loss scalar value. Dict: Statistics to be monitored. Tensor: Weight value if not joint training, otherwise model outputs.

Return type:

Tensor

inference(text: torch.Tensor, feats: Optional[torch.Tensor] = None, durations: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, alpha: float = 1.0, use_teacher_forcing: bool = False) → Dict[str, torch.Tensor][source]

Generate the sequence of features given the sequences of characters.

Parameters:
  • text (LongTensor) – Input sequence of characters (T_text,).

  • feats (Optional[Tensor]) – Feature sequence to extract style (N, idim).

  • durations (Optional[Tensor]) – Groundtruth of duration (T_text + 1,).

  • spembs (Optional[Tensor]) – Speaker embedding vector (spk_embed_dim,).

  • sids (Optional[Tensor]) – Speaker ID (1,).

  • lids (Optional[Tensor]) – Language ID (1,).

  • pitch (Optional[Tensor]) – Groundtruth of token-avg pitch (T_text + 1, 1).

  • energy (Optional[Tensor]) – Groundtruth of token-avg energy (T_text + 1, 1).

  • alpha (float) – Alpha to control the speed.

  • use_teacher_forcing (bool) – Whether to use teacher forcing. If true, groundtruth of duration, pitch and energy will be used.

Returns:

Output dict including the following items:
  • feat_gen (Tensor): Output sequence of features (T_feats, odim).

  • duration (Tensor): Duration sequence (T_text + 1,).

  • pitch (Tensor): Pitch sequence (T_text + 1,).

  • energy (Tensor): Energy sequence (T_text + 1,).

Return type:

Dict[str, Tensor]
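
Example (a minimal inference sketch; idim/odim are placeholder values and all other constructor arguments are left at their documented defaults, so sampling runs the full 1000 diffusion steps):

    import torch
    from espnet2.tts.prodiff.prodiff import ProDiff

    model = ProDiff(idim=50, odim=80)   # 50 token ids, 80-dim mel target
    model.eval()
    text = torch.randint(1, 50, (15,))  # (T_text,) input token ids
    with torch.no_grad():
        output = model.inference(text, alpha=1.0)
    print(output["feat_gen"].shape)     # (T_feats, 80)
    print(output["duration"].shape)     # (T_text + 1,)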

espnet2.tts.prodiff.denoiser

class espnet2.tts.prodiff.denoiser.Mish(*args, **kwargs)[source]

Bases: torch.nn.modules.module.Module

Mish Activation Function.

Introduced in Mish: A Self Regularized Non-Monotonic Activation Function.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x: torch.Tensor) → torch.Tensor[source]

Calculate forward propagation.

Parameters:

x (torch.Tensor) – Input tensor.

Returns:

Output tensor.

Return type:

torch.Tensor
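
Example (a minimal sketch; per the referenced paper, Mish(x) = x * tanh(softplus(x))):

    import torch
    from espnet2.tts.prodiff.denoiser import Mish

    act = Mish()
    x = torch.linspace(-3.0, 3.0, steps=7)
    y = act(x)  # same shape as x, smooth non-monotonic activation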

class espnet2.tts.prodiff.denoiser.ResidualBlock(adim: int, channels: int, dilation: int)[source]

Bases: torch.nn.modules.module.Module

Residual Block for Diffusion Denoiser.

Initialization.

Parameters:
  • adim (int) – Size of dimensions.

  • channels (int) – Number of channels.

  • dilation (int) – Size of dilations.

forward(x: torch.Tensor, condition: torch.Tensor, step: torch.Tensor) → torch.Tensor[source]

Calculate forward propagation.

Parameters:
  • x (torch.Tensor) – Input tensor.

  • condition (torch.Tensor) – Conditioning tensor.

  • step (torch.Tensor) – Number of diffusion step.

Returns:

Output tensor.

Return type:

torch.Tensor

class espnet2.tts.prodiff.denoiser.SpectogramDenoiser(idim: int, adim: int = 256, layers: int = 20, channels: int = 256, cycle_length: int = 1, timesteps: int = 200, timescale: int = 1, max_beta: float = 40.0, scheduler: str = 'vpsde', dropout_rate: float = 0.05)[source]

Bases: torch.nn.modules.module.Module

Spectrogram Denoiser.

Ref: https://arxiv.org/pdf/2207.06389.pdf.

Initialization.

Parameters:
  • idim (int) – Dimension of the inputs.

  • adim (int, optional) – Dimension of the hidden states. Defaults to 256.

  • layers (int, optional) – Number of layers. Defaults to 20.

  • channels (int, optional) – Number of channels of each layer. Defaults to 256.

  • cycle_length (int, optional) – Cycle length of the diffusion. Defaults to 1.

  • timesteps (int, optional) – Number of timesteps of the diffusion. Defaults to 200.

  • timescale (int, optional) – Number of timescale. Defaults to 1.

  • max_beta (float, optional) – Maximum beta value for the scheduler. Defaults to 40.

  • scheduler (str, optional) – Type of noise scheduler. Defaults to “vpsde”.

  • dropout_rate (float, optional) – Dropout rate. Defaults to 0.05.

diffusion(xs_ref: torch.Tensor, steps: torch.Tensor, noise: Optional[torch.Tensor] = None) → torch.Tensor[source]

Calculate diffusion process during training.

Parameters:
  • xs_ref (torch.Tensor) – Input tensor.

  • steps (torch.Tensor) – Number of step.

  • noise (Optional[torch.Tensor], optional) – Noise tensor. Defaults to None.

Returns:

Output tensor.

Return type:

torch.Tensor

forward(xs: torch.Tensor, ys: Optional[torch.Tensor] = None, masks: Optional[torch.Tensor] = None, is_inference: bool = False) → torch.Tensor[source]

Calculate forward propagation.

Parameters:
  • xs (torch.Tensor) – Phoneme-encoded tensor (#batch, time, dims).

  • ys (Optional[torch.Tensor], optional) – Mel-based reference tensor (#batch, time, mels). Defaults to None.

  • masks (Optional[torch.Tensor], optional) – Mask tensor (#batch, time). Defaults to None.

Returns:

Output tensor (#batch, time, dims).

Return type:

torch.Tensor

forward_denoise(xs_noisy: torch.Tensor, step: torch.Tensor, condition: torch.Tensor) → torch.Tensor[source]

Calculate forward for denoising diffusion.

Parameters:
  • xs_noisy (torch.Tensor) – Input tensor.

  • step (torch.Tensor) – Number of step.

  • condition (torch.Tensor) – Conditioning tensor.

Returns:

Denoised tensor.

Return type:

torch.Tensor

inference(condition: torch.Tensor) → torch.Tensor[source]

Calculate forward during inference.

Parameters:

condition (torch.Tensor) – Conditioning tensor (batch, time, dims).

Returns:

Output tensor.

Return type:

torch.Tensor

espnet2.tts.prodiff.denoiser.noise_scheduler(sched_type: str, timesteps: int, min_beta: float = 0.0, max_beta: float = 0.01, s: float = 0.008) → torch.Tensor[source]

Noise Scheduler.

Parameters:
  • sched_type (str) – Type of scheduler.

  • timesteps (int) – Number of time steps.

  • min_beta (float, optional) – Minimum beta. Defaults to 0.0.

  • max_beta (float, optional) – Maximum beta. Defaults to 0.01.

  • s (float, optional) – Scheduler intersection. Defaults to 0.008.

Returns:

Noise.

Return type:

torch.Tensor
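
Example (a minimal sketch using the "vpsde" scheduler that SpectogramDenoiser defaults to; the set of supported scheduler names, the valid beta range, and the exact shape of the returned schedule depend on the implementation):

    import torch
    from espnet2.tts.prodiff.denoiser import noise_scheduler

    betas = noise_scheduler("vpsde", timesteps=200, max_beta=40.0)
    print(betas.shape)  # expected to hold one beta value per diffusion timestep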

espnet2.tts.prodiff.loss

ProDiff related loss module for ESPnet2.

class espnet2.tts.prodiff.loss.ProDiffLoss(use_masking: bool = True, use_weighted_masking: bool = False)[source]

Bases: torch.nn.modules.module.Module

Loss function module for ProDiff.

Initialize ProDiff loss module.

Parameters:
  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

forward(after_outs: torch.Tensor, before_outs: torch.Tensor, d_outs: torch.Tensor, p_outs: torch.Tensor, e_outs: torch.Tensor, ys: torch.Tensor, ds: torch.Tensor, ps: torch.Tensor, es: torch.Tensor, ilens: torch.Tensor, olens: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • after_outs (Tensor) – Batch of outputs after postnets (B, T_feats, odim).

  • before_outs (Tensor) – Batch of outputs before postnets (B, T_feats, odim).

  • d_outs (LongTensor) – Batch of outputs of duration predictor (B, T_text).

  • p_outs (Tensor) – Batch of outputs of pitch predictor (B, T_text, 1).

  • e_outs (Tensor) – Batch of outputs of energy predictor (B, T_text, 1).

  • ys (Tensor) – Batch of target features (B, T_feats, odim).

  • ds (LongTensor) – Batch of durations (B, T_text).

  • ps (Tensor) – Batch of target token-averaged pitch (B, T_text, 1).

  • es (Tensor) – Batch of target token-averaged energy (B, T_text, 1).

  • ilens (LongTensor) – Batch of the lengths of each input (B,).

  • olens (LongTensor) – Batch of the lengths of each target (B,).

Returns:

L1 loss value. Tensor: Duration predictor loss value. Tensor: Pitch predictor loss value. Tensor: Energy predictor loss value.

Return type:

Tensor

class espnet2.tts.prodiff.loss.SSimLoss(bias: float = 6.0, window_size: int = 11, channels: int = 1, reduction: str = 'none')[source]

Bases: torch.nn.modules.module.Module

SSimLoss.

This is an implementation of structural similarity (SSIM) loss. This code is modified from https://github.com/Po-Hsun-Su/pytorch-ssim.

Initialization.

Parameters:
  • bias (float, optional) – Value of the bias. Defaults to 6.0.

  • window_size (int, optional) – Window size. Defaults to 11.

  • channels (int, optional) – Number of channels. Defaults to 1.

  • reduction (str, optional) – Type of reduction during the loss calculation. Defaults to “none”.

forward(outputs: torch.Tensor, target: torch.Tensor)[source]

Calculate forward propagation.

Parameters:
  • outputs (torch.Tensor) – Batch of output sequences generated by the model (batch, time, mels).

  • target (torch.Tensor) – Batch of sequences with true states (batch, time, mels).

Returns:

Loss scalar value.

Return type:

Tensor

ssim(tensor1: torch.Tensor, tensor2: torch.Tensor)[source]

Calculate SSIM loss.

Parameters:
  • tensor1 (torch.Tensor) – Generated output.

  • tensor2 (torch.Tensor) – Groundtruth output.

Returns:

Loss scalar value.

Return type:

Tensor
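
Example (a minimal sketch with random batches standing in for generated and ground-truth mel-spectrograms):

    import torch
    from espnet2.tts.prodiff.loss import SSimLoss

    criterion = SSimLoss()
    outputs = torch.rand(2, 120, 80)   # (batch, time, mels) model outputs
    target = torch.rand(2, 120, 80)    # (batch, time, mels) ground truth
    loss = criterion(outputs, target)  # loss value (lower means more similar)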

espnet2.tts.prodiff.loss.gaussian(window_size: int, sigma: float) → torch.Tensor[source]

Gaussian Noise.

Parameters:
  • window_size (int) – Window size.

  • sigma (float) – Noise sigma.

Returns:

Noise.

Return type:

torch.Tensor
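
Example (a minimal sketch; in the pytorch-ssim code this module is adapted from, this helper returns a normalized 1-D Gaussian window of length window_size, and the same behavior is assumed here):

    import torch
    from espnet2.tts.prodiff.loss import gaussian

    window = gaussian(window_size=11, sigma=1.5)
    print(window.shape)         # expected: torch.Size([11])
    print(float(window.sum()))  # expected to be close to 1.0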

espnet2.tts.prodiff.__init__