espnet2.tts2.fastspeech2.fastspeech2_discrete.FastSpeech2Discrete
class espnet2.tts2.fastspeech2.fastspeech2_discrete.FastSpeech2Discrete(idim: int, odim: int, adim: int = 384, aheads: int = 4, elayers: int = 6, eunits: int = 1536, dlayers: int = 6, dunits: int = 1536, postnet_layers: int = 5, postnet_chans: int = 512, postnet_filts: int = 5, postnet_dropout_rate: float = 0.5, positionwise_layer_type: str = 'conv1d', positionwise_conv_kernel_size: int = 1, use_scaled_pos_enc: bool = True, use_batch_norm: bool = True, encoder_normalize_before: bool = True, decoder_normalize_before: bool = True, encoder_concat_after: bool = False, decoder_concat_after: bool = False, reduction_factor: int = 1, encoder_type: str = 'transformer', decoder_type: str = 'transformer', transformer_enc_dropout_rate: float = 0.1, transformer_enc_positional_dropout_rate: float = 0.1, transformer_enc_attn_dropout_rate: float = 0.1, transformer_dec_dropout_rate: float = 0.1, transformer_dec_positional_dropout_rate: float = 0.1, transformer_dec_attn_dropout_rate: float = 0.1, conformer_rel_pos_type: str = 'legacy', conformer_pos_enc_layer_type: str = 'rel_pos', conformer_self_attn_layer_type: str = 'rel_selfattn', conformer_activation_type: str = 'swish', use_macaron_style_in_conformer: bool = True, use_cnn_in_conformer: bool = True, zero_triu: bool = False, conformer_enc_kernel_size: int = 7, conformer_dec_kernel_size: int = 31, duration_predictor_layers: int = 2, duration_predictor_chans: int = 384, duration_predictor_kernel_size: int = 3, duration_predictor_dropout_rate: float = 0.1, energy_predictor_layers: int = 2, energy_predictor_chans: int = 384, energy_predictor_kernel_size: int = 3, energy_predictor_dropout: float = 0.5, energy_embed_kernel_size: int = 9, energy_embed_dropout: float = 0.5, stop_gradient_from_energy_predictor: bool = False, pitch_predictor_layers: int = 2, pitch_predictor_chans: int = 384, pitch_predictor_kernel_size: int = 3, pitch_predictor_dropout: float = 0.5, pitch_embed_kernel_size: int = 9, pitch_embed_dropout: float = 0.5, 
stop_gradient_from_pitch_predictor: bool = False, spks: int | None = None, langs: int | None = None, spk_embed_dim: int | None = None, spk_embed_integration_type: str = 'add', init_type: str = 'xavier_uniform', init_enc_alpha: float = 1.0, init_dec_alpha: float = 1.0, use_masking: bool = False, use_weighted_masking: bool = False, ignore_id: int = 0, discrete_token_layers: int = 1)
Bases: AbsTTS2
FastSpeech2 module with discrete output.
This is a discrete-output FastSpeech2 module: it uses the same FastSpeech2 architecture as tts1, but predicts discrete tokens as output.
Initialize FastSpeech2 module.
- Parameters:
- idim (int) – Dimension of the inputs.
 - odim (int) – Dimension of the outputs.
 - adim (int) – Attention dimension.
 - aheads (int) – Number of attention heads.
 - elayers (int) – Number of encoder layers.
 - eunits (int) – Number of encoder hidden units.
 - dlayers (int) – Number of decoder layers.
 - dunits (int) – Number of decoder hidden units.
 - postnet_layers (int) – Number of postnet layers.
 - postnet_chans (int) – Number of postnet channels.
 - postnet_filts (int) – Kernel size of postnet.
 - postnet_dropout_rate (float) – Dropout rate in postnet.
 - use_scaled_pos_enc (bool) – Whether to use trainable scaled pos encoding.
 - use_batch_norm (bool) – Whether to use batch normalization in encoder prenet.
 - encoder_normalize_before (bool) – Whether to apply layernorm layer before encoder block.
 - decoder_normalize_before (bool) – Whether to apply layernorm layer before decoder block.
 - encoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in encoder.
 - decoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in decoder.
 - reduction_factor (int) – Reduction factor.
 - encoder_type (str) – Encoder type (“transformer” or “conformer”).
 - decoder_type (str) – Decoder type (“transformer” or “conformer”).
 - transformer_enc_dropout_rate (float) – Dropout rate in encoder except attention and positional encoding.
 - transformer_enc_positional_dropout_rate (float) – Dropout rate after encoder positional encoding.
 - transformer_enc_attn_dropout_rate (float) – Dropout rate in encoder self-attention module.
 - transformer_dec_dropout_rate (float) – Dropout rate in decoder except attention & positional encoding.
 - transformer_dec_positional_dropout_rate (float) – Dropout rate after decoder positional encoding.
 - transformer_dec_attn_dropout_rate (float) – Dropout rate in decoder self-attention module.
 - conformer_rel_pos_type (str) – Relative pos encoding type in conformer.
 - conformer_pos_enc_layer_type (str) – Pos encoding layer type in conformer.
 - conformer_self_attn_layer_type (str) – Self-attention layer type in conformer.
 - conformer_activation_type (str) – Activation function type in conformer.
 - use_macaron_style_in_conformer (bool) – Whether to use macaron style FFN.
 - use_cnn_in_conformer (bool) – Whether to use CNN in conformer.
 - zero_triu (bool) – Whether to use zero triu in relative self-attention module.
 - conformer_enc_kernel_size (int) – Kernel size of encoder conformer.
 - conformer_dec_kernel_size (int) – Kernel size of decoder conformer.
 - duration_predictor_layers (int) – Number of duration predictor layers.
 - duration_predictor_chans (int) – Number of duration predictor channels.
 - duration_predictor_kernel_size (int) – Kernel size of duration predictor.
 - duration_predictor_dropout_rate (float) – Dropout rate in duration predictor.
 - pitch_predictor_layers (int) – Number of pitch predictor layers.
 - pitch_predictor_chans (int) – Number of pitch predictor channels.
 - pitch_predictor_kernel_size (int) – Kernel size of pitch predictor.
 - pitch_predictor_dropout (float) – Dropout rate in pitch predictor.
 - pitch_embed_kernel_size (int) – Kernel size of pitch embedding.
 - pitch_embed_dropout (float) – Dropout rate for pitch embedding.
 - stop_gradient_from_pitch_predictor (bool) – Whether to stop gradient from pitch predictor to encoder.
 - energy_predictor_layers (int) – Number of energy predictor layers.
 - energy_predictor_chans (int) – Number of energy predictor channels.
 - energy_predictor_kernel_size (int) – Kernel size of energy predictor.
 - energy_predictor_dropout (float) – Dropout rate in energy predictor.
 - energy_embed_kernel_size (int) – Kernel size of energy embedding.
 - energy_embed_dropout (float) – Dropout rate for energy embedding.
 - stop_gradient_from_energy_predictor (bool) – Whether to stop gradient from energy predictor to encoder.
 - spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.
 - langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.
 - spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.
 - spk_embed_integration_type (str) – How to integrate speaker embedding.
 - init_type (str) – How to initialize transformer parameters.
 - init_enc_alpha (float) – Initial value of alpha in scaled pos encoding of the encoder.
 - init_dec_alpha (float) – Initial value of alpha in scaled pos encoding of the decoder.
 - use_masking (bool) – Whether to apply masking for padded part in loss calculation.
 - use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.
 - ignore_id (int) – Padding token id ignored in loss calculation.
 - discrete_token_layers (int) – Number of discrete token layers in the output.
 
 
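The two loss-masking flags above differ in how the per-position loss is averaged over a padded batch. A minimal pure-Python sketch of the usual convention, with plain nested lists in place of tensors (the function names are illustrative, not part of the espnet2 API):

```python
# Illustrative sketch (not espnet2 API): plain masking vs weighted masking
# when reducing a per-position loss over a padded batch.

def masked_mean(losses, lengths):
    """use_masking: average over all valid (non-padded) positions."""
    total = sum(sum(row[:n]) for row, n in zip(losses, lengths))
    return total / sum(lengths)

def weighted_masked_mean(losses, lengths):
    """use_weighted_masking: each sequence contributes equally,
    so short utterances are not dominated by long ones."""
    per_seq = [sum(row[:n]) / n for row, n in zip(losses, lengths)]
    return sum(per_seq) / len(per_seq)
```

With plain masking, a long utterance contributes more terms to the average; weighted masking first averages within each utterance, then across the batch.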
forward(text: Tensor, text_lengths: Tensor, discrete_feats: Tensor, discrete_feats_lengths: Tensor, durations: Tensor, durations_lengths: Tensor, pitch: Tensor, pitch_lengths: Tensor, energy: Tensor, energy_lengths: Tensor, spembs: Tensor | None = None, sids: Tensor | None = None, lids: Tensor | None = None, joint_training: bool = False) → Tuple[Tensor, Dict[str, Tensor], Tensor]
Calculate forward propagation.
- Parameters:
- text (LongTensor) – Batch of padded token ids (B, T_text).
 - text_lengths (LongTensor) – Batch of lengths of each input (B,).
 - discrete_feats (Tensor) – Discrete speech tensor (B, T_token).
 - discrete_feats_lengths (LongTensor) – Discrete speech length tensor (B,).
 - durations (LongTensor) – Batch of padded durations (B, T_text + 1).
 - durations_lengths (LongTensor) – Batch of duration lengths (B, T_text + 1).
 - pitch (Tensor) – Batch of padded token-averaged pitch (B, T_text + 1, 1).
 - pitch_lengths (LongTensor) – Batch of pitch lengths (B, T_text + 1).
 - energy (Tensor) – Batch of padded token-averaged energy (B, T_text + 1, 1).
 - energy_lengths (LongTensor) – Batch of energy lengths (B, T_text + 1).
 - spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).
 - sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).
 - lids (Optional[Tensor]) – Batch of language IDs (B, 1).
 - joint_training (bool) – Whether to perform joint training with vocoder.
 
 - Returns: Tensor: Loss scalar value. Dict: Statistics to be monitored. Tensor: Weight value if not joint training, else model outputs.
 - Return type: Tuple[Tensor, Dict[str, Tensor], Tensor]
 
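The `T_text + 1` shapes above reflect the `<eos>` token appended to the input: durations, pitch, and energy are aligned to the extended token sequence, and the durations should sum to the discrete feature length. A sketch of that consistency check for one unpadded example, with plain lists in place of tensors (the helper name is illustrative):

```python
def check_example_shapes(text, durations, pitch, energy, discrete_feats):
    """Illustrative shape check for one (unpadded) training example."""
    t_text = len(text)
    # durations/pitch/energy cover the text plus one appended <eos> position
    assert len(durations) == t_text + 1
    assert len(pitch) == t_text + 1
    assert len(energy) == t_text + 1
    # each token's duration gives its number of output frames
    assert sum(durations) == len(discrete_feats)
    return True
```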
inference(text: Tensor, durations: Tensor | None = None, spembs: Tensor | None = None, sids: Tensor | None = None, lids: Tensor | None = None, pitch: Tensor | None = None, energy: Tensor | None = None, alpha: float = 1.0, use_teacher_forcing: bool = False) → Dict[str, Tensor]
Generate the sequence of features given the sequences of characters.
- Parameters:
- text (LongTensor) – Input sequence of characters (T_text,).
 - durations (Optional[Tensor]) – Groundtruth of duration (T_text + 1,).
 - spembs (Optional[Tensor]) – Speaker embedding vector (spk_embed_dim,).
 - sids (Optional[Tensor]) – Speaker ID (1,).
 - lids (Optional[Tensor]) – Language ID (1,).
 - pitch (Optional[Tensor]) – Groundtruth of token-averaged pitch (T_text + 1, 1).
 - energy (Optional[Tensor]) – Groundtruth of token-averaged energy (T_text + 1, 1).
 - alpha (float) – Alpha to control the speed.
 - use_teacher_forcing (bool) – Whether to use teacher forcing. If true, groundtruth of duration, pitch and energy will be used.
 
 - Returns: Output dict including the following items:
   - feat_gen (Tensor): Output sequence of features (T_feats, odim).
   - duration (Tensor): Duration sequence (T_text + 1,).
   - pitch (Tensor): Pitch sequence (T_text + 1,).
   - energy (Tensor): Energy sequence (T_text + 1,).
 
 - Return type: Dict[str, Tensor]
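The `alpha` argument controls speaking speed through the length regulator: predicted durations are rescaled and rounded before each encoder state is repeated that many times. A minimal sketch of the idea, with plain lists in place of tensors (the exact scaling in espnet2's `LengthRegulator` may differ):

```python
def regulate_length(hidden_states, durations, alpha=1.0):
    """Repeat each encoder state by its (optionally rescaled) duration."""
    if alpha != 1.0:
        # rescale durations to change the overall speaking speed
        durations = [round(d * alpha) for d in durations]
    out = []
    for h, d in zip(hidden_states, durations):
        out.extend([h] * max(int(d), 0))
    return out
```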
 
