espnet2.tts.espnet_model.ESPnetTTSModel
class espnet2.tts.espnet_model.ESPnetTTSModel(feats_extract: AbsFeatsExtract | None, pitch_extract: AbsFeatsExtract | None, energy_extract: AbsFeatsExtract | None, normalize: InversibleInterface | None, pitch_normalize: InversibleInterface | None, energy_normalize: InversibleInterface | None, tts: AbsTTS)
Bases: AbsESPnetModel
ESPnet model for text-to-speech task.
Initialize ESPnetTTSModel module.
collect_feats(text: Tensor, text_lengths: Tensor, speech: Tensor, speech_lengths: Tensor, durations: Tensor | None = None, durations_lengths: Tensor | None = None, pitch: Tensor | None = None, pitch_lengths: Tensor | None = None, energy: Tensor | None = None, energy_lengths: Tensor | None = None, spembs: Tensor | None = None, sids: Tensor | None = None, lids: Tensor | None = None, **kwargs) → Dict[str, Tensor]
Calculate features and return them as a dict.
- Parameters:
- text (Tensor) – Text index tensor (B, T_text).
- text_lengths (Tensor) – Text length tensor (B,).
- speech (Tensor) – Speech waveform tensor (B, T_wav).
- speech_lengths (Tensor) – Speech length tensor (B,).
- durations (Optional[Tensor]) – Duration tensor.
- durations_lengths (Optional[Tensor]) – Duration length tensor (B,).
- pitch (Optional[Tensor]) – Pitch tensor.
- pitch_lengths (Optional[Tensor]) – Pitch length tensor (B,).
- energy (Optional[Tensor]) – Energy tensor.
- energy_lengths (Optional[Tensor]) – Energy length tensor (B,).
- spembs (Optional[Tensor]) – Speaker embedding tensor (B, D).
- sids (Optional[Tensor]) – Speaker ID tensor (B, 1).
- lids (Optional[Tensor]) – Language ID tensor (B, 1).
- Returns: Dict of features.
- Return type: Dict[str, Tensor]
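The (B, T) shape convention above — padded index tensors paired with a length tensor — can be sketched in plain Python with hypothetical toy data (no ESPnet dependency; `pad_batch` is an illustrative helper, not part of the API):

```python
def pad_batch(seqs, pad_value=0):
    """Pad variable-length token lists into a rectangular (B, T) batch
    and return the per-utterance lengths, mirroring how `text` (B, T_text)
    and `text_lengths` (B,) relate in the methods above."""
    max_len = max(len(s) for s in seqs)
    batch = [s + [pad_value] * (max_len - len(s)) for s in seqs]
    lengths = [len(s) for s in seqs]
    return batch, lengths

# Three utterances of different lengths become a 3 x 4 batch (B=3, T_text=4).
text, text_lengths = pad_batch([[5, 2, 9], [7, 1], [3, 8, 4, 6]])
# text_lengths == [3, 2, 4]; positions past each length are padding.
```

In practice ESPnet builds such batches as padded `torch.Tensor`s, and the length tensor lets each method mask out the padded positions.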
forward(text: Tensor, text_lengths: Tensor, speech: Tensor, speech_lengths: Tensor, durations: Tensor | None = None, durations_lengths: Tensor | None = None, pitch: Tensor | None = None, pitch_lengths: Tensor | None = None, energy: Tensor | None = None, energy_lengths: Tensor | None = None, spembs: Tensor | None = None, sids: Tensor | None = None, lids: Tensor | None = None, **kwargs) → Tuple[Tensor, Dict[str, Tensor], Tensor]
Calculate outputs and return the loss tensor.
- Parameters:
- text (Tensor) – Text index tensor (B, T_text).
- text_lengths (Tensor) – Text length tensor (B,).
- speech (Tensor) – Speech waveform tensor (B, T_wav).
- speech_lengths (Tensor) – Speech length tensor (B,).
- durations (Optional[Tensor]) – Duration tensor.
- durations_lengths (Optional[Tensor]) – Duration length tensor (B,).
- pitch (Optional[Tensor]) – Pitch tensor.
- pitch_lengths (Optional[Tensor]) – Pitch length tensor (B,).
- energy (Optional[Tensor]) – Energy tensor.
- energy_lengths (Optional[Tensor]) – Energy length tensor (B,).
- spembs (Optional[Tensor]) – Speaker embedding tensor (B, D).
- sids (Optional[Tensor]) – Speaker ID tensor (B, 1).
- lids (Optional[Tensor]) – Language ID tensor (B, 1).
- kwargs – Additional arguments; “utt_id” is among the inputs.
- Returns: Tuple of loss scalar tensor, statistics dict to be monitored (Dict[str, Tensor]), and weight tensor to summarize losses.
- Return type: Tuple[Tensor, Dict[str, Tensor], Tensor]
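The weight tensor in the returned triple (typically the batch size) is what makes per-batch statistics combinable into a correct overall average. A minimal sketch of that weighted aggregation, with hypothetical numbers and a plain-Python stand-in for the stats dicts:

```python
def aggregate_stats(batches):
    """Weighted-average per-key statistics over (stats, weight) pairs,
    as done when summarizing forward() outputs across batches/devices."""
    totals, total_weight = {}, 0.0
    for stats, weight in batches:
        total_weight += weight
        for key, value in stats.items():
            totals[key] = totals.get(key, 0.0) + value * weight
    return {key: value / total_weight for key, value in totals.items()}

# A batch of 4 utterances with loss 2.0 and a batch of 1 with loss 1.0:
agg = aggregate_stats([({"loss": 2.0}, 4), ({"loss": 1.0}, 1)])
# agg["loss"] == (2.0 * 4 + 1.0 * 1) / 5 == 1.8, not the naive mean 1.5.
```

Without the weight, batches of different sizes would be averaged incorrectly.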
inference(text: Tensor, speech: Tensor | None = None, spembs: Tensor | None = None, sids: Tensor | None = None, lids: Tensor | None = None, durations: Tensor | None = None, pitch: Tensor | None = None, energy: Tensor | None = None, **decode_config) → Dict[str, Tensor]
Calculate outputs and return them as a dict.
- Parameters:
- text (Tensor) – Text index tensor (T_text).
- speech (Tensor) – Speech waveform tensor (T_wav).
- spembs (Optional *[*Tensor ]) – Speaker embedding tensor (D,).
- sids (Optional *[*Tensor ]) – Speaker ID tensor (1,).
- lids (Optional *[*Tensor ]) – Language ID tensor (1,).
- durations (Optional[Tensor]) – Duration tensor.
- pitch (Optional[Tensor]) – Pitch tensor.
- energy (Optional[Tensor]) – Energy tensor.
- Returns: Dict of outputs.
- Return type: Dict[str, Tensor]
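Note that, unlike forward(), inference() takes a single unbatched utterance — the shapes above drop the batch dimension B. A hedged usage sketch with a stub standing in for a model instance (the stub, its feature dimension of 80, and the output key "feat_gen" are all hypothetical; the real keys depend on the wrapped tts module):

```python
class FakeTTSModel:
    """Illustrative stand-in for a constructed ESPnetTTSModel (hypothetical)."""

    def inference(self, text, **decode_config):
        # A real model generates features from text; this stub only mimics the
        # dict-of-outputs return shape with one fake frame per input token.
        return {"feat_gen": [[0.0] * 80 for _ in text]}

model = FakeTTSModel()
out = model.inference(text=[5, 2, 9])  # unbatched: shape (T_text,), no (B, ...)
feats = out["feat_gen"]                # e.g. generated features (T_feats, odim)
```

The decode_config keyword arguments are forwarded to the underlying tts module's inference, so the accepted options vary by model.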