espnet2.tts2 package

espnet2.tts2.espnet_model

Text-to-speech ESPnet model.

class espnet2.tts2.espnet_model.ESPnetTTS2Model(discrete_feats_extract: espnet2.tts2.feats_extract.abs_feats_extract.AbsFeatsExtractDiscrete, pitch_extract: Optional[espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract], energy_extract: Optional[espnet2.tts.feats_extract.abs_feats_extract.AbsFeatsExtract], pitch_normalize: Optional[espnet2.layers.inversible_interface.InversibleInterface], energy_normalize: Optional[espnet2.layers.inversible_interface.InversibleInterface], tts: espnet2.tts2.abs_tts2.AbsTTS2)[source]

Bases: espnet2.train.abs_espnet_model.AbsESPnetModel

ESPnet model for text-to-speech task.

Initialize ESPnetTTS2Model module.
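
A minimal construction sketch (not taken from any recipe): the model composes a discrete feature extractor and a TTS2 module documented elsewhere on this page. The vocabulary sizes below are illustrative assumptions:

from espnet2.tts2.espnet_model import ESPnetTTS2Model
from espnet2.tts2.feats_extract.identity import IdentityFeatureExtract
from espnet2.tts2.fastspeech2.fastspeech2_discrete import FastSpeech2Discrete

# Assumed sizes: 40 text symbols, 500 discrete speech units.
tts = FastSpeech2Discrete(idim=40, odim=500)
model = ESPnetTTS2Model(
    discrete_feats_extract=IdentityFeatureExtract(),  # keep tokens as-is
    pitch_extract=None,
    energy_extract=None,
    pitch_normalize=None,
    energy_normalize=None,
    tts=tts,
)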

collect_feats(text: torch.Tensor, text_lengths: torch.Tensor, discrete_speech: torch.Tensor, discrete_speech_lengths: torch.Tensor, speech: torch.Tensor, speech_lengths: torch.Tensor, durations: Optional[torch.Tensor] = None, durations_lengths: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, pitch_lengths: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, energy_lengths: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, **kwargs) → Dict[str, torch.Tensor][source]

Calculate features and return them as a dict.

Parameters:
  • text (Tensor) – Text index tensor (B, T_text).

  • text_lengths (Tensor) – Text length tensor (B,).

  • speech (Tensor) – Speech waveform tensor (B, T_wav).

  • speech_lengths (Tensor) – Speech length tensor (B,).

  • discrete_speech (Tensor) – Discrete speech tensor (B, T_token).

  • discrete_speech_lengths (Tensor) – Discrete speech length tensor (B,).

  • durations (Optional[Tensor]) – Duration tensor.

  • durations_lengths (Optional[Tensor]) – Duration length tensor (B,).

  • pitch (Optional[Tensor]) – Pitch tensor.

  • pitch_lengths (Optional[Tensor]) – Pitch length tensor (B,).

  • energy (Optional[Tensor]) – Energy tensor.

  • energy_lengths (Optional[Tensor]) – Energy length tensor (B,).

  • spembs (Optional[Tensor]) – Speaker embedding tensor (B, D).

  • sids (Optional[Tensor]) – Speaker ID tensor (B, 1).

  • lids (Optional[Tensor]) – Language ID tensor (B, 1).

Returns:

Dict of features.

Return type:

Dict[str, Tensor]

forward(text: torch.Tensor, text_lengths: torch.Tensor, discrete_speech: torch.Tensor, discrete_speech_lengths: torch.Tensor, speech: torch.Tensor, speech_lengths: torch.Tensor, durations: Optional[torch.Tensor] = None, durations_lengths: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, pitch_lengths: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, energy_lengths: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, **kwargs) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate outputs and return the loss tensor.

Parameters:
  • text (Tensor) – Text index tensor (B, T_text).

  • text_lengths (Tensor) – Text length tensor (B,).

  • speech (Tensor) – Speech waveform tensor (B, T_wav).

  • speech_lengths (Tensor) – Speech length tensor (B,).

  • discrete_speech (Tensor) – Discrete speech tensor (B, T_token).

  • discrete_speech_lengths (Tensor) – Discrete speech length tensor (B,).

  • durations (Optional[Tensor]) – Duration tensor.

  • durations_lengths (Optional[Tensor]) – Duration length tensor (B,).

  • pitch (Optional[Tensor]) – Pitch tensor.

  • pitch_lengths (Optional[Tensor]) – Pitch length tensor (B,).

  • energy (Optional[Tensor]) – Energy tensor.

  • energy_lengths (Optional[Tensor]) – Energy length tensor (B,).

  • spembs (Optional[Tensor]) – Speaker embedding tensor (B, D).

  • sids (Optional[Tensor]) – Speaker ID tensor (B, 1).

  • lids (Optional[Tensor]) – Language ID tensor (B, 1).

  • kwargs – Additional keyword arguments; “utt_id” is among the inputs.

Returns:

Loss scalar tensor. Dict[str, float]: Statistics to be monitored. Tensor: Weight tensor to summarize losses.

Return type:

Tensor
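
Continuing the construction sketch above, a hedged example of a training-style forward call with dummy tensors. All shapes are assumptions; each row of durations (length T_text + 1, accounting for the appended eos) must sum to the corresponding discrete token length. Because pitch_extract and energy_extract are None in the sketch, precomputed pitch and energy are passed directly:

import torch

text = torch.randint(1, 38, (2, 5))                   # (B, T_text), assumed vocabulary
text_lengths = torch.tensor([5, 5])
discrete_speech = torch.randint(0, 500, (2, 12))      # (B, T_token)
discrete_speech_lengths = torch.tensor([12, 12])
speech = torch.randn(2, 1600)                         # raw waveform; unused without pitch/energy extractors
speech_lengths = torch.tensor([1600, 1600])
durations = torch.full((2, 6), 2, dtype=torch.long)   # (B, T_text + 1), each row sums to 12

loss, stats, weight = model(
    text=text,
    text_lengths=text_lengths,
    discrete_speech=discrete_speech,
    discrete_speech_lengths=discrete_speech_lengths,
    speech=speech,
    speech_lengths=speech_lengths,
    durations=durations,
    durations_lengths=torch.tensor([6, 6]),
    pitch=torch.randn(2, 6, 1),
    pitch_lengths=torch.tensor([6, 6]),
    energy=torch.randn(2, 6, 1),
    energy_lengths=torch.tensor([6, 6]),
)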

inference(text: torch.Tensor, speech: Optional[torch.Tensor] = None, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, durations: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, **decode_config) → Dict[str, torch.Tensor][source]

Calculate outputs and return them as a dict.

Parameters:
  • text (Tensor) – Text index tensor (T_text).

  • speech (Tensor) – Speech waveform tensor (T_wav).

  • spembs (Optional[Tensor]) – Speaker embedding tensor (D,).

  • sids (Optional[Tensor]) – Speaker ID tensor (1,).

  • lids (Optional[Tensor]) – Language ID tensor (1,).

  • durations (Optional[Tensor]) – Duration tensor.

  • pitch (Optional[Tensor]) – Pitch tensor.

  • energy (Optional[Tensor]) – Energy tensor.

Returns:

Dict of outputs.

Return type:

Dict[str, Tensor]
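
A hedged single-utterance inference sketch for the same model; the text indices are dummies, and the keys of the returned dict depend on the underlying TTS2 module (see FastSpeech2Discrete.inference below):

import torch

text = torch.randint(1, 38, (5,))   # (T_text,), assumed vocabulary as above
with torch.no_grad():
    output_dict = model.inference(text)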

espnet2.tts2.abs_tts2

Text-to-speech abstract class.

class espnet2.tts2.abs_tts2.AbsTTS2(*args, **kwargs)[source]

Bases: torch.nn.modules.module.Module, abc.ABC

TTS2 (Discrete Unit-Based TTS) abstract class.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

abstract forward(text: torch.Tensor, text_lengths: torch.Tensor, feats: torch.Tensor, feats_lengths: torch.Tensor, **kwargs) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate outputs and return the loss tensor.

abstract inference(text: torch.Tensor, **kwargs) → Dict[str, torch.Tensor][source]

Return predicted output as a dict.

property require_raw_speech

Return whether or not raw_speech is required.

property require_vocoder

Return whether or not vocoder is required.
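
A hedged sketch of a custom discrete-unit TTS module built on AbsTTS2. The toy architecture, the returned dict keys, and the assumption that T_feats >= T_text are illustrative, not part of the ESPnet API:

import torch
from espnet2.tts2.abs_tts2 import AbsTTS2


class ToyDiscreteTTS(AbsTTS2):
    """Toy module that predicts one discrete unit per input token."""

    def __init__(self, idim: int, odim: int, hidden: int = 64):
        super().__init__()
        self.embed = torch.nn.Embedding(idim, hidden)
        self.proj = torch.nn.Linear(hidden, odim)

    def forward(self, text, text_lengths, feats, feats_lengths, **kwargs):
        # Cross-entropy over the first T_text discrete targets (no length regulation;
        # assumes feats is at least T_text long).
        logits = self.proj(self.embed(text))  # (B, T_text, odim)
        loss = torch.nn.functional.cross_entropy(
            logits.transpose(1, 2), feats[:, : text.size(1)]
        )
        stats = {"loss": loss.detach()}
        weight = torch.tensor(float(text.size(0)))  # batch size as the loss weight
        return loss, stats, weight

    def inference(self, text, **kwargs):
        logits = self.proj(self.embed(text.unsqueeze(0)))
        return {"feat_gen": logits.argmax(dim=-1).squeeze(0)}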

espnet2.tts2.__init__

espnet2.tts2.feats_extract.identity

class espnet2.tts2.feats_extract.identity.IdentityFeatureExtract[source]

Bases: espnet2.tts2.feats_extract.abs_feats_extract.AbsFeatsExtractDiscrete

Keep the input discrete sequence as-is.

forward(input: torch.Tensor, input_lengths: torch.Tensor) → Tuple[Any, Dict][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
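
A minimal usage sketch; the token values and shapes are illustrative. Per the class description, the discrete sequence is passed through unchanged, and the exact contents of the returned info dict are not specified here:

import torch
from espnet2.tts2.feats_extract.identity import IdentityFeatureExtract

extractor = IdentityFeatureExtract()
tokens = torch.randint(0, 500, (2, 12))         # (B, T_token), assumed 500-unit vocabulary
token_lengths = torch.tensor([12, 10])
feats, info = extractor(tokens, token_lengths)  # feats keeps the input sequence as-is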

espnet2.tts2.feats_extract.abs_feats_extract

class espnet2.tts2.feats_extract.abs_feats_extract.AbsFeatsExtractDiscrete(*args, **kwargs)[source]

Bases: torch.nn.modules.module.Module, abc.ABC

Parse the discrete token sequence into a structured data format for prediction, e.g., (1) keep it as a sequence, (2) resize it as a matrix, (3) multi-resolution, and so on.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

abstract forward(input: torch.Tensor, input_lengths: torch.Tensor) → Tuple[Any, Dict][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
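
A hedged sketch of option (2) above, reshaping a flat multi-stream token sequence into a (frames x streams) matrix. The class name, stream count, and returned dict key are assumptions for illustration:

from typing import Any, Dict, Tuple

import torch
from espnet2.tts2.feats_extract.abs_feats_extract import AbsFeatsExtractDiscrete


class ReshapeFeatureExtract(AbsFeatsExtractDiscrete):
    """Reshape a flat token stream into (B, T_token // num_streams, num_streams)."""

    def __init__(self, num_streams: int = 2):
        super().__init__()
        self.num_streams = num_streams

    def forward(
        self, input: torch.Tensor, input_lengths: torch.Tensor
    ) -> Tuple[Any, Dict]:
        B, T = input.shape
        assert T % self.num_streams == 0, "token length must be a multiple of num_streams"
        feats = input.view(B, T // self.num_streams, self.num_streams)
        return feats, {"feats_lengths": input_lengths // self.num_streams}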

espnet2.tts2.feats_extract.__init__

espnet2.tts2.fastspeech2.loss

FastSpeech2-related loss module for ESPnet2. Speech targets are discrete units.

class espnet2.tts2.fastspeech2.loss.FastSpeech2LossDiscrete(use_masking: bool = True, use_weighted_masking: bool = False, ignore_id: int = -1)[source]

Bases: torch.nn.modules.module.Module

Loss function module for FastSpeech2.

Initialize feed-forward Transformer loss module.

Parameters:
  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.

forward(after_outs: torch.Tensor, before_outs: torch.Tensor, d_outs: torch.Tensor, p_outs: torch.Tensor, e_outs: torch.Tensor, ys: torch.Tensor, ds: torch.Tensor, ps: torch.Tensor, es: torch.Tensor, ilens: torch.Tensor, olens: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • after_outs (Tensor) – Batch of outputs after postnets (B, T_feats, odim).

  • before_outs (Tensor) – Batch of outputs before postnets (B, T_feats, odim).

  • d_outs (LongTensor) – Batch of outputs of duration predictor (B, T_text).

  • p_outs (Tensor) – Batch of outputs of pitch predictor (B, T_text, 1).

  • e_outs (Tensor) – Batch of outputs of energy predictor (B, T_text, 1).

  • ys (Tensor) – Batch of target features in discrete space (B, T_feats).

  • ds (LongTensor) – Batch of durations (B, T_text).

  • ps (Tensor) – Batch of target token-averaged pitch (B, T_text, 1).

  • es (Tensor) – Batch of target token-averaged energy (B, T_text, 1).

  • ilens (LongTensor) – Batch of the lengths of each input (B,).

  • olens (LongTensor) – Batch of the lengths of each target (B,).

Returns:

CrossEntropy loss value. Tensor: Duration predictor loss value. Tensor: Pitch predictor loss value. Tensor: Energy predictor loss value.

Return type:

Tensor
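
A hedged sketch of a loss call with dummy tensors; the shapes follow the parameter list above, and the concrete sizes (B=2, T_text=4, T_feats=8, odim=500) are assumptions:

import torch
from espnet2.tts2.fastspeech2.loss import FastSpeech2LossDiscrete

criterion = FastSpeech2LossDiscrete(use_masking=True)

B, T_text, T_feats, odim = 2, 4, 8, 500
before_outs = torch.randn(B, T_feats, odim)        # logits over discrete units
after_outs = torch.randn(B, T_feats, odim)         # postnet logits (same shape)
d_outs = torch.randn(B, T_text)                    # duration predictor outputs
p_outs = torch.randn(B, T_text, 1)                 # pitch predictor outputs
e_outs = torch.randn(B, T_text, 1)                 # energy predictor outputs
ys = torch.randint(1, odim, (B, T_feats))          # discrete target units
ds = torch.full((B, T_text), 2, dtype=torch.long)  # target durations
ps = torch.randn(B, T_text, 1)
es = torch.randn(B, T_text, 1)
ilens = torch.tensor([T_text, T_text])
olens = torch.tensor([T_feats, T_feats])

ce_loss, duration_loss, pitch_loss, energy_loss = criterion(
    after_outs, before_outs, d_outs, p_outs, e_outs, ys, ds, ps, es, ilens, olens
)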

espnet2.tts2.fastspeech2.__init__

espnet2.tts2.fastspeech2.fastspeech2_discrete

FastSpeech2-related modules for ESPnet2.

class espnet2.tts2.fastspeech2.fastspeech2_discrete.FastSpeech2Discrete(idim: int, odim: int, adim: int = 384, aheads: int = 4, elayers: int = 6, eunits: int = 1536, dlayers: int = 6, dunits: int = 1536, postnet_layers: int = 5, postnet_chans: int = 512, postnet_filts: int = 5, postnet_dropout_rate: float = 0.5, positionwise_layer_type: str = 'conv1d', positionwise_conv_kernel_size: int = 1, use_scaled_pos_enc: bool = True, use_batch_norm: bool = True, encoder_normalize_before: bool = True, decoder_normalize_before: bool = True, encoder_concat_after: bool = False, decoder_concat_after: bool = False, reduction_factor: int = 1, encoder_type: str = 'transformer', decoder_type: str = 'transformer', transformer_enc_dropout_rate: float = 0.1, transformer_enc_positional_dropout_rate: float = 0.1, transformer_enc_attn_dropout_rate: float = 0.1, transformer_dec_dropout_rate: float = 0.1, transformer_dec_positional_dropout_rate: float = 0.1, transformer_dec_attn_dropout_rate: float = 0.1, conformer_rel_pos_type: str = 'legacy', conformer_pos_enc_layer_type: str = 'rel_pos', conformer_self_attn_layer_type: str = 'rel_selfattn', conformer_activation_type: str = 'swish', use_macaron_style_in_conformer: bool = True, use_cnn_in_conformer: bool = True, zero_triu: bool = False, conformer_enc_kernel_size: int = 7, conformer_dec_kernel_size: int = 31, duration_predictor_layers: int = 2, duration_predictor_chans: int = 384, duration_predictor_kernel_size: int = 3, duration_predictor_dropout_rate: float = 0.1, energy_predictor_layers: int = 2, energy_predictor_chans: int = 384, energy_predictor_kernel_size: int = 3, energy_predictor_dropout: float = 0.5, energy_embed_kernel_size: int = 9, energy_embed_dropout: float = 0.5, stop_gradient_from_energy_predictor: bool = False, pitch_predictor_layers: int = 2, pitch_predictor_chans: int = 384, pitch_predictor_kernel_size: int = 3, pitch_predictor_dropout: float = 0.5, pitch_embed_kernel_size: int = 9, pitch_embed_dropout: float = 0.5, stop_gradient_from_pitch_predictor: bool = False, spks: Optional[int] = None, langs: Optional[int] = None, spk_embed_dim: Optional[int] = None, spk_embed_integration_type: str = 'add', init_type: str = 'xavier_uniform', init_enc_alpha: float = 1.0, init_dec_alpha: float = 1.0, use_masking: bool = False, use_weighted_masking: bool = False, ignore_id: int = 0)[source]

Bases: espnet2.tts2.abs_tts2.AbsTTS2

FastSpeech2 module with discrete output.

This is a module for discrete-output FastSpeech2: it uses the same FastSpeech2 architecture as tts1, but with discrete tokens as output.

Initialize FastSpeech2 module.

Parameters:
  • idim (int) – Dimension of the inputs.

  • odim (int) – Dimension of the outputs.

  • elayers (int) – Number of encoder layers.

  • eunits (int) – Number of encoder hidden units.

  • dlayers (int) – Number of decoder layers.

  • dunits (int) – Number of decoder hidden units.

  • postnet_layers (int) – Number of postnet layers.

  • postnet_chans (int) – Number of postnet channels.

  • postnet_filts (int) – Kernel size of postnet.

  • postnet_dropout_rate (float) – Dropout rate in postnet.

  • use_scaled_pos_enc (bool) – Whether to use trainable scaled pos encoding.

  • use_batch_norm (bool) – Whether to use batch normalization in encoder prenet.

  • encoder_normalize_before (bool) – Whether to apply layernorm layer before encoder block.

  • decoder_normalize_before (bool) – Whether to apply layernorm layer before decoder block.

  • encoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in encoder.

  • decoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in decoder.

  • reduction_factor (int) – Reduction factor.

  • encoder_type (str) – Encoder type (“transformer” or “conformer”).

  • decoder_type (str) – Decoder type (“transformer” or “conformer”).

  • transformer_enc_dropout_rate (float) – Dropout rate in encoder except attention and positional encoding.

  • transformer_enc_positional_dropout_rate (float) – Dropout rate after encoder positional encoding.

  • transformer_enc_attn_dropout_rate (float) – Dropout rate in encoder self-attention module.

  • transformer_dec_dropout_rate (float) – Dropout rate in decoder except attention & positional encoding.

  • transformer_dec_positional_dropout_rate (float) – Dropout rate after decoder positional encoding.

  • transformer_dec_attn_dropout_rate (float) – Dropout rate in decoder self-attention module.

  • conformer_rel_pos_type (str) – Relative pos encoding type in conformer.

  • conformer_pos_enc_layer_type (str) – Pos encoding layer type in conformer.

  • conformer_self_attn_layer_type (str) – Self-attention layer type in conformer

  • conformer_activation_type (str) – Activation function type in conformer.

  • use_macaron_style_in_conformer – Whether to use macaron style FFN.

  • use_cnn_in_conformer – Whether to use CNN in conformer.

  • zero_triu – Whether to use zero triu in relative self-attention module.

  • conformer_enc_kernel_size – Kernel size of encoder conformer.

  • conformer_dec_kernel_size – Kernel size of decoder conformer.

  • duration_predictor_layers (int) – Number of duration predictor layers.

  • duration_predictor_chans (int) – Number of duration predictor channels.

  • duration_predictor_kernel_size (int) – Kernel size of duration predictor.

  • duration_predictor_dropout_rate (float) – Dropout rate in duration predictor.

  • pitch_predictor_layers (int) – Number of pitch predictor layers.

  • pitch_predictor_chans (int) – Number of pitch predictor channels.

  • pitch_predictor_kernel_size (int) – Kernel size of pitch predictor.

  • pitch_predictor_dropout_rate (float) – Dropout rate in pitch predictor.

  • pitch_embed_kernel_size (int) – Kernel size of pitch embedding.

  • pitch_embed_dropout_rate (float) – Dropout rate for pitch embedding.

  • stop_gradient_from_pitch_predictor – Whether to stop gradient from pitch predictor to encoder.

  • energy_predictor_layers (int) – Number of energy predictor layers.

  • energy_predictor_chans (int) – Number of energy predictor channels.

  • energy_predictor_kernel_size (int) – Kernel size of energy predictor.

  • energy_predictor_dropout_rate (float) – Dropout rate in energy predictor.

  • energy_embed_kernel_size (int) – Kernel size of energy embedding.

  • energy_embed_dropout_rate (float) – Dropout rate for energy embedding.

  • stop_gradient_from_energy_predictor – Whether to stop gradient from energy predictor to encoder.

  • spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.

  • langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.

  • spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.

  • spk_embed_integration_type – How to integrate speaker embedding.

  • init_type (str) – How to initialize transformer parameters.

  • init_enc_alpha (float) – Initial value of alpha in scaled pos encoding of the encoder.

  • init_dec_alpha (float) – Initial value of alpha in scaled pos encoding of the decoder.

  • use_masking (bool) – Whether to apply masking for padded part in loss calculation.

  • use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.
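
A minimal construction sketch: only the required idim (text vocabulary size) and odim (number of discrete units) are set, both assumed values, and all other hyperparameters keep their documented defaults:

from espnet2.tts2.fastspeech2.fastspeech2_discrete import FastSpeech2Discrete

# Assumed sizes: 40 text symbols in, 500 discrete speech units out.
tts = FastSpeech2Discrete(idim=40, odim=500)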

forward(text: torch.Tensor, text_lengths: torch.Tensor, discrete_feats: torch.Tensor, discrete_feats_lengths: torch.Tensor, durations: torch.Tensor, durations_lengths: torch.Tensor, pitch: torch.Tensor, pitch_lengths: torch.Tensor, energy: torch.Tensor, energy_lengths: torch.Tensor, spembs: Optional[torch.Tensor] = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, joint_training: bool = False) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]

Calculate forward propagation.

Parameters:
  • text (LongTensor) – Batch of padded token ids (B, T_text).

  • text_lengths (LongTensor) – Batch of lengths of each input (B,).

  • discrete_feats (Tensor) – Discrete speech tensor (B, T_token).

  • discrete_feats_lengths (LongTensor) – Discrete speech length tensor (B,).

  • durations (LongTensor) – Batch of padded durations (B, T_text + 1).

  • durations_lengths (LongTensor) – Batch of duration lengths (B, T_text + 1).

  • pitch (Tensor) – Batch of padded token-averaged pitch (B, T_text + 1, 1).

  • pitch_lengths (LongTensor) – Batch of pitch lengths (B, T_text + 1).

  • energy (Tensor) – Batch of padded token-averaged energy (B, T_text + 1, 1).

  • energy_lengths (LongTensor) – Batch of energy lengths (B, T_text + 1).

  • spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).

  • sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).

  • lids (Optional[Tensor]) – Batch of language IDs (B, 1).

  • joint_training (bool) – Whether to perform joint training with vocoder.

Returns:

Loss scalar value. Dict: Statistics to be monitored. Tensor: Weight value if not joint training else model outputs.

Return type:

Tensor
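
Continuing the construction sketch above, a hedged dummy forward call. Shapes follow the parameter list (B=2, T_text=5, T_token=12 are assumptions); each row of durations has length T_text + 1 and must sum to the corresponding discrete token length:

import torch

text = torch.randint(1, 38, (2, 5))                   # (B, T_text), indices within the assumed vocabulary
text_lengths = torch.tensor([5, 5])
discrete_feats = torch.randint(0, 500, (2, 12))       # (B, T_token)
discrete_feats_lengths = torch.tensor([12, 12])
durations = torch.full((2, 6), 2, dtype=torch.long)   # (B, T_text + 1), each row sums to 12
durations_lengths = torch.tensor([6, 6])
pitch = torch.randn(2, 6, 1)
pitch_lengths = torch.tensor([6, 6])
energy = torch.randn(2, 6, 1)
energy_lengths = torch.tensor([6, 6])

loss, stats, weight = tts(
    text, text_lengths, discrete_feats, discrete_feats_lengths,
    durations, durations_lengths, pitch, pitch_lengths,
    energy, energy_lengths,
)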

inference(text: torch.Tensor, durations: Optional[torch.Tensor] = None, spembs: torch.Tensor = None, sids: Optional[torch.Tensor] = None, lids: Optional[torch.Tensor] = None, pitch: Optional[torch.Tensor] = None, energy: Optional[torch.Tensor] = None, alpha: float = 1.0, use_teacher_forcing: bool = False) → Dict[str, torch.Tensor][source]

Generate the sequence of features given the sequences of characters.

Parameters:
  • text (LongTensor) – Input sequence of characters (T_text,).

  • durations (Optional[Tensor]) – Groundtruth of duration (T_text + 1,).

  • spembs (Optional[Tensor]) – Speaker embedding vector (spk_embed_dim,).

  • sids (Optional[Tensor]) – Speaker ID (1,).

  • lids (Optional[Tensor]) – Language ID (1,).

  • pitch (Optional[Tensor]) – Groundtruth of token-avg pitch (T_text + 1, 1).

  • energy (Optional[Tensor]) – Groundtruth of token-avg energy (T_text + 1, 1).

  • alpha (float) – Alpha to control the speed.

  • use_teacher_forcing (bool) – Whether to use teacher forcing. If true, groundtruth of duration, pitch and energy will be used.

Returns:

Output dict including the following items:
  • feat_gen (Tensor): Output sequence of features (T_feats, odim).

  • duration (Tensor): Duration sequence (T_text + 1,).

  • pitch (Tensor): Pitch sequence (T_text + 1,).

  • energy (Tensor): Energy sequence (T_text + 1,).

Return type:

Dict[str, Tensor]
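
A hedged inference sketch using the same module; the text indices are dummies. With use_teacher_forcing left at its default, durations, pitch, and energy are predicted internally, and alpha controls the speaking speed:

import torch

text = torch.randint(1, 38, (5,))      # (T_text,), assumed vocabulary as above
with torch.no_grad():
    output = tts.inference(text, alpha=1.0)

feat_gen = output["feat_gen"]          # (T_feats, odim) per the return description above
duration = output["duration"]          # (T_text + 1,)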