espnet2.asr_transducer.decoder.rwkv_decoder.RWKVDecoder
class espnet2.asr_transducer.decoder.rwkv_decoder.RWKVDecoder(vocab_size: int, block_size: int = 512, context_size: int = 1024, linear_size: int | None = None, attention_size: int | None = None, normalization_type: str = 'layer_norm', normalization_args: Dict = {}, num_blocks: int = 4, rescale_every: int = 0, embed_dropout_rate: float = 0.0, att_dropout_rate: float = 0.0, ffn_dropout_rate: float = 0.0, embed_pad: int = 0)
Bases: AbsDecoder
RWKV decoder module.
Based on https://arxiv.org/pdf/2305.13048.pdf.
- Parameters:
- vocab_size – Vocabulary size.
- block_size – Input/Output size.
- context_size – Context size for WKV computation.
- linear_size – FeedForward hidden size.
- attention_size – SelfAttention hidden size.
- normalization_type – Normalization layer type.
- normalization_args – Normalization layer arguments.
- num_blocks – Number of RWKV blocks.
- rescale_every – If set to N > 0, rescale the input every N blocks (inference only).
- embed_dropout_rate – Dropout rate for embedding layer.
- att_dropout_rate – Dropout rate for the attention module.
- ffn_dropout_rate – Dropout rate for the feed-forward module.
- embed_pad – Embedding padding symbol ID.
Construct a RWKVDecoder object.
batch_score(hyps: List[Hypothesis]) → Tuple[Tensor, List[Tensor]]
One-step forward computation for a batch of hypotheses.
- Parameters: hyps – Hypotheses.
- Returns:
  - out – Decoder output sequence. (B, D_dec)
  - states – Decoder hidden states. [5 x (B, 1, D_att/D_dec, N)]
create_batch_states(new_states: List[List[Dict[str, Tensor]]]) → List[Tensor]
Create batch of decoder hidden states given a list of new states.
- Parameters: new_states – Decoder hidden states. [B x [5 x (1, 1, D_att/D_dec, N)]]
- Returns: Decoder hidden states. [5 x (B, 1, D_att/D_dec, N)]
forward(labels: Tensor) → Tensor
Encode source label sequences.
- Parameters: labels – Decoder input sequences. (B, L)
- Returns: out – Decoder output sequences. (B, U, D_dec)
inference(labels: Tensor, states: Tensor) → Tuple[Tensor, List[Tensor]]
Encode source label sequences.
- Parameters:
- labels – Decoder input sequences. (B, L)
- states – Decoder hidden states. [5 x (B, D_att/D_dec, N)]
- Returns:
  - out – Decoder output sequences. (B, U, D_dec)
  - states – Decoder hidden states. [5 x (B, D_att/D_dec, N)]
init_state(batch_size: int = 1) → List[Tensor]
Initialize RWKVDecoder states.
- Parameters: batch_size – Batch size.
- Returns: states – Decoder hidden states. [5 x (B, 1, D_att/D_dec, N)]
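The shape notation [5 x (B, 1, D_att/D_dec, N)] denotes a list of five tensors, one per recurrent quantity, each with a leading batch dimension. Below is a minimal stdlib sketch of what init_state produces; the helpers are hypothetical and nested Python lists stand in for the torch tensors ESPnet actually returns, so only the documented shapes are mirrored:

```python
# Hypothetical stand-ins for torch tensors: nested lists of zeros.
# Only the documented state shapes are mirrored here.

def zeros(*shape):
    """Build a nested list of zeros with the given shape."""
    if len(shape) == 1:
        return [0.0] * shape[0]
    return [zeros(*shape[1:]) for _ in range(shape[0])]

def shape(x):
    """Recover the shape of a nested list built by zeros()."""
    dims = []
    while isinstance(x, list):
        dims.append(len(x))
        x = x[0]
    return tuple(dims)

def init_state(batch_size=1, hidden_size=8, num_blocks=4):
    """Sketch of RWKVDecoder.init_state: a list of five zero
    tensors, each shaped (B, 1, D, N) as documented above."""
    return [zeros(batch_size, 1, hidden_size, num_blocks) for _ in range(5)]

states = init_state(batch_size=2, hidden_size=8, num_blocks=4)
print(len(states), shape(states[0]))  # 5 (2, 1, 8, 4)
```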
score(label_sequence: List[int], states: List[Tensor]) → Tuple[Tensor, List[Tensor]]
One-step forward computation for a single hypothesis.
- Parameters:
- label_sequence – Current label sequence.
- states – Decoder hidden states. [5 x (1, 1, D_att/D_dec, N)]
- Returns:
  - out – Decoder output sequence. (D_dec)
  - states – Decoder hidden states. [5 x (1, 1, D_att/D_dec, N)]
select_state(states: List[Tensor], idx: int) → List[Tensor]
Select ID state from batch of decoder hidden states.
- Parameters:
  - states – Decoder hidden states. [5 x (B, 1, D_att/D_dec, N)]
  - idx – State ID to extract.
- Returns: Decoder hidden states for given ID. [5 x (1, 1, D_att/D_dec, N)]
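During beam search, select_state and create_batch_states act as rough inverses: the former extracts one hypothesis's state from the batch, the latter reassembles a batch from per-hypothesis states. A hedged stdlib sketch of that round trip, again with nested lists standing in for torch tensors (the helpers are hypothetical; the shapes follow the documentation above):

```python
def select_state(states, idx):
    """Sketch of RWKVDecoder.select_state: slice hypothesis idx
    from [5 x (B, 1, D, N)] -> [5 x (1, 1, D, N)]."""
    return [[tensor[idx]] for tensor in states]

def create_batch_states(new_states):
    """Sketch of RWKVDecoder.create_batch_states: merge
    [B x [5 x (1, 1, D, N)]] -> [5 x (B, 1, D, N)]."""
    return [
        [hyp[i][0] for hyp in new_states]   # re-stack the batch dimension
        for i in range(len(new_states[0]))  # iterate over the 5 state tensors
    ]

# Batch of B=2 hypotheses with D=N=1, values chosen so each entry is unique.
batch_states = [[[[[b * 10.0 + i]]] for b in range(2)] for i in range(5)]

per_hyp = [select_state(batch_states, b) for b in range(2)]
assert per_hyp[1][3] == [[[[13.0]]]]                 # tensor 3 of hypothesis 1
assert create_batch_states(per_hyp) == batch_states  # round trip restores the batch
```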
set_device(device: device) → None
Set GPU device to use.
- Parameters: device – Device to use.