espnet.nets.pytorch_backend.e2e_tts_tacotron2.GuidedAttentionLoss
class espnet.nets.pytorch_backend.e2e_tts_tacotron2.GuidedAttentionLoss(sigma=0.4, alpha=1.0, reset_always=True)
Bases: Module
Guided attention loss function module.
This module calculates the guided attention loss described in Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention, which forces the attention to be diagonal.
Initialize guided attention loss module.
- Parameters:
- sigma (float, optional) – Standard deviation controlling how close the attention must be to the diagonal.
- alpha (float, optional) – Scaling coefficient (lambda).
- reset_always (bool, optional) – Whether to always reset masks.
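For intuition, the guided attention weight penalizes attention mass far from the diagonal: for input length N and output length T, the weight at output step t and input step n is W(t, n) = 1 - exp(-(n/N - t/T)^2 / (2 * sigma^2)), and the loss averages att_ws * W over the unpadded region, scaled by alpha. The sketch below computes this weight matrix for a single utterance; it is illustrative, not the module's internal code, and the function name is hypothetical (assumes PyTorch >= 1.10 for the `indexing` argument).

```python
import torch

def guided_attention_weight(ilen, olen, sigma=0.4):
    """Sketch: guided attention weight matrix W of shape (olen, ilen).

    W[t, n] = 1 - exp(-(n/ilen - t/olen)^2 / (2 * sigma^2)),
    so the weight grows as (t, n) moves away from the diagonal.
    """
    grid_t, grid_n = torch.meshgrid(
        torch.arange(olen).float(), torch.arange(ilen).float(), indexing="ij"
    )
    return 1.0 - torch.exp(
        -((grid_n / ilen - grid_t / olen) ** 2) / (2 * sigma**2)
    )
```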
forward(att_ws, ilens, olens)
Calculate forward propagation.
- Parameters:
- att_ws (Tensor) – Batch of attention weights (B, T_max_out, T_max_in).
- ilens (LongTensor) – Batch of input lengths (B,).
- olens (LongTensor) – Batch of output lengths (B,).
- Returns: Guided attention loss value.
- Return type: Tensor
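A minimal usage sketch following the signature above; the attention weights here are random and only illustrate the expected tensor shapes, not a trained model's output.

```python
import torch
from espnet.nets.pytorch_backend.e2e_tts_tacotron2 import GuidedAttentionLoss

criterion = GuidedAttentionLoss(sigma=0.4, alpha=1.0)

B, T_max_out, T_max_in = 2, 20, 10
# Random stand-in for attention weights (B, T_max_out, T_max_in).
att_ws = torch.softmax(torch.randn(B, T_max_out, T_max_in), dim=-1)
ilens = torch.tensor([10, 7])   # per-utterance input lengths (B,)
olens = torch.tensor([20, 15])  # per-utterance output lengths (B,)

loss = criterion(att_ws, ilens, olens)  # scalar Tensor
```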