espnet.nets.pytorch_backend.transducer.custom_decoder.CustomDecoder
class espnet.nets.pytorch_backend.transducer.custom_decoder.CustomDecoder(odim: int, dec_arch: List, input_layer: str = 'embed', repeat_block: int = 0, joint_activation_type: str = 'tanh', positional_encoding_type: str = 'abs_pos', positionwise_layer_type: str = 'linear', positionwise_activation_type: str = 'relu', input_layer_dropout_rate: float = 0.0, blank_id: int = 0)
Bases: TransducerDecoderInterface, Module
Custom decoder module for Transducer model.
- Parameters:
- odim – Output dimension.
- dec_arch – Decoder block architecture (type and parameters).
- input_layer – Input layer type.
- repeat_block – Number of times dec_arch is repeated.
- joint_activation_type – Type of activation for joint network.
- positional_encoding_type – Positional encoding type.
- positionwise_layer_type – Positionwise layer type.
- positionwise_activation_type – Positionwise activation type.
- input_layer_dropout_rate – Dropout rate for input layer.
- blank_id – Blank symbol ID.
Construct a CustomDecoder object.
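Since dec_arch pairs each block's type with its parameters, a hypothetical configuration could look like the following sketch. The key names in the dicts are illustrative assumptions, not the exact schema ESPnet's block builder expects:

```python
# Hypothetical dec_arch value: one config dict per decoder block.
# Key names here are assumptions for illustration only.
dec_arch = [
    {"type": "transformer", "d_hidden": 256, "d_ff": 1024, "heads": 4},
    {"type": "transformer", "d_hidden": 256, "d_ff": 1024, "heads": 4},
]

# With repeat_block=2, the whole dec_arch list is applied twice,
# doubling the effective number of decoder blocks.
repeat_block = 2
effective_blocks = len(dec_arch) * repeat_block
```

With repeat_block left at its default of 0, only the blocks listed in dec_arch are built once.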
batch_score(hyps: List[Hypothesis] | List[ExtendedHypothesis], dec_states: List[Tensor | None], cache: Dict[str, Any], use_lm: bool) → Tuple[Tensor, List[Tensor | None], Tensor]
One-step forward for a batch of hypotheses.
- Parameters:
- hyps – Hypotheses.
- dec_states – Decoder hidden states. [N x (B, U, D_dec)]
- cache – Pairs of (h_dec, dec_states), keyed by label sequence.
- use_lm – Whether to compute label ID sequences for LM.
- Returns:
  - dec_out – Decoder output sequences. (B, D_dec)
  - dec_states – Decoder hidden states. [N x (B, U, D_dec)]
  - lm_labels – Label ID sequences for LM. (B,)
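The caching and lm_labels behaviour can be sketched in plain Python, with lists standing in for tensors. The helper names below are illustrative, not ESPnet internals: each hypothesis is keyed by its label sequence so already-decoded prefixes are reused, and when use_lm is true the last label ID of each hypothesis is collected for the external LM.

```python
# Plain-Python sketch of the batch_score caching pattern.
def batch_score_sketch(hyps, cache, decode_one, use_lm):
    dec_out, lm_labels = [], []
    for labels in hyps:
        key = str(labels)
        if key not in cache:
            cache[key] = decode_one(labels)  # cache miss: run the decoder
        dec_out.append(cache[key])
        if use_lm:
            lm_labels.append(labels[-1])  # last label ID, for the LM
    return dec_out, lm_labels

decode_one = lambda labels: [float(sum(labels))]  # toy stand-in decoder
cache = {}
dec_out, lm_labels = batch_score_sketch(
    [(0, 7, 2), (0, 4)], cache, decode_one, use_lm=True
)
```

In the real method the cached values are (dec_out, dec_states) tensor pairs rather than toy lists, but the key-by-label-sequence structure is the same idea.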
create_batch_states(states: List[Tensor | None], new_states: List[Tensor | None], check_list: List[List[int]]) → List[Tensor | None]
Create decoder hidden states sequences.
- Parameters:
- states – Decoder hidden states. [N x (B, U, D_dec)]
- new_states – Decoder hidden states. [B x [N x (1, U, D_dec)]]
- check_list – Label ID sequences.
- Returns: New decoder hidden states. [N x (B, U, D_dec)]
- Return type: states
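The regrouping this method performs can be sketched with nested lists in place of tensors: per-hypothesis states [B x [N x (1, U, D_dec)]] are turned into per-layer batch states [N x (B, U, D_dec)]. This is a simplified stand-in, not ESPnet's implementation:

```python
# Sketch of create_batch_states: regroup per-hypothesis states
# [B x [N x (1, U, D)]] into per-layer batch states [N x (B, U, D)].
def create_batch_states_sketch(new_states):
    n_layers = len(new_states[0])
    return [
        [hyp_states[layer][0] for hyp_states in new_states]  # drop the 1-dim
        for layer in range(n_layers)
    ]

# Two hypotheses, one layer, U=2 label positions, D=1 feature.
per_hyp = [
    [[[[0.1], [0.2]]]],  # hyp 0: layer-0 state of shape (1, 2, 1)
    [[[[0.3], [0.4]]]],  # hyp 1
]
batched = create_batch_states_sketch(per_hyp)
```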
forward(dec_input: Tensor, dec_mask: Tensor) → Tuple[Tensor, Tensor]
Encode label ID sequences.
- Parameters:
- dec_input – Label ID sequences. (B, U)
- dec_mask – Label mask sequences. (B, U)
- Returns:
  - dec_output – Decoder output sequences. (B, U, D_dec)
  - dec_output_mask – Mask of decoder output sequences. (B, U)
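A minimal sketch of the (B, U) dec_mask argument, assuming right-padded label sequences (the helper name is hypothetical): True marks a real label position and False marks padding.

```python
# Build a (B, U) boolean padding mask for right-padded label sequences.
def make_label_mask(label_lens, max_len):
    return [[u < length for u in range(max_len)] for length in label_lens]

# Batch of two sequences with 3 and 1 labels respectively, padded to U=3.
dec_mask = make_label_mask([3, 1], max_len=3)
```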
init_state(batch_size: int | None = None) → List[Tensor | None]
Initialize decoder states.
- Parameters: batch_size – Batch size.
- Returns: Initial decoder hidden states. [N x None]
- Return type: state
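As the return annotation [N x None] suggests, the initial state is simply one empty slot per decoder layer; a trivial stand-in:

```python
# Sketch of init_state: one uninitialized (None) slot per decoder layer.
def init_state_sketch(n_layers):
    return [None] * n_layers

state = init_state_sketch(4)
```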
score(hyp: Hypothesis, cache: Dict[str, Any]) → Tuple[Tensor, List[Tensor | None], Tensor]
One-step forward for a single hypothesis.
- Parameters:
- hyp – Hypothesis.
- cache – Pairs of (dec_out, dec_state), keyed by label sequence.
- Returns:
  - dec_out – Decoder output sequence. (1, D_dec)
  - dec_state – Decoder hidden states. [N x (1, U, D_dec)]
  - lm_label – Label ID for LM. (1,)
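The single-hypothesis caching pattern can be sketched in plain Python (names are illustrative): the label sequence serves as cache key, so scoring the same hypothesis twice invokes the real decoder only once.

```python
# Sketch of the score() caching pattern: decode each label sequence once.
def score_sketch(labels, cache, decode_one):
    key = str(labels)
    if key not in cache:
        cache[key] = decode_one(labels)  # cache miss: run the decoder
    return cache[key]

calls = []
def decode_one(labels):
    calls.append(labels)                 # record real decoder invocations
    return [float(sum(labels))], None    # toy (dec_out, dec_state)

cache = {}
out1, _ = score_sketch((0, 3), cache, decode_one)
out2, _ = score_sketch((0, 3), cache, decode_one)  # cache hit, no new call
```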
select_state(states: List[Tensor | None], idx: int) → List[Tensor | None]
Get specified ID state from decoder hidden states.
- Parameters:
- states – Decoder hidden states. [N x (B, U, D_dec)]
- idx – State ID to extract.
- Returns: Decoder hidden state for given ID. [N x (1, U, D_dec)]
- Return type: state_idx
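The extraction can be sketched with nested lists in place of tensors: hypothesis idx is picked out of each layer's batched state, keeping a leading batch dimension of 1, while uninitialized (None) layers pass through unchanged. A simplified stand-in, not ESPnet's implementation:

```python
# Sketch of select_state: slice hypothesis `idx` from [N x (B, U, D)]
# batched states, yielding [N x (1, U, D)].
def select_state_sketch(states, idx):
    return [None if s is None else [s[idx]] for s in states]

# One layer with B=2 hypotheses (U=1, D_dec=1), plus one unset layer.
states = [[[[0.1]], [[0.2]]], None]
state_idx = select_state_sketch(states, 1)
```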
set_device(device: device)
Set GPU device to use.
- Parameters: device – Device to use.