espnet2.mt.frontend.embedding.CodecEmbedding
class espnet2.mt.frontend.embedding.CodecEmbedding(input_size, hf_model_tag: str = 'espnet/amuse_encodec_16k', token_bias: int = 2, token_per_frame: int = 8, pos_enc_class=<class 'espnet.nets.pytorch_backend.transformer.embedding.PositionalEncoding'>, positional_dropout_rate: float = 0.1)
Bases: AbsFrontend
Use the codec dequantization process to turn discrete codec tokens into continuous vectors and use those vectors as the input embeddings.
Initialize.
- Parameters:
- input_size – size of the input token vocabulary
- hf_model_tag – HuggingFace model tag for ESPnet codec models
- token_bias – index of the first codec code in the token vocabulary
- token_per_frame – number of codec tokens used to represent each frame
- pos_enc_class – PositionalEncoding or ScaledPositionalEncoding
- positional_dropout_rate – dropout rate applied after adding the positional encoding
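The roles of `token_bias` and `token_per_frame` can be sketched in plain Python (no ESPnet dependency). This is a hypothetical illustration, not the actual implementation: it assumes tokens below `token_bias` are reserved special symbols and that each audio frame is represented by `token_per_frame` consecutive codec codes.

```python
TOKEN_BIAS = 2        # index of the first codec code (default in the signature)
TOKEN_PER_FRAME = 8   # codec codes per frame (default in the signature)

def tokens_to_frames(tokens):
    """Strip the bias and group a flat token sequence into per-frame code lists."""
    codes = [t - TOKEN_BIAS for t in tokens]  # map token ids to raw codec indices
    assert len(codes) % TOKEN_PER_FRAME == 0, "length must be a multiple of token_per_frame"
    return [codes[i:i + TOKEN_PER_FRAME]
            for i in range(0, len(codes), TOKEN_PER_FRAME)]

# 16 tokens -> 2 frames of 8 codec codes each
frames = tokens_to_frames(list(range(2, 18)))
```

Each frame's codes would then be dequantized by the codec and summed with a positional encoding before being fed to the encoder.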
forward(input: Tensor, input_lengths: Tensor)
Defines the computation performed at every call.
Should be overridden by all subclasses.
NOTE
Although the forward pass must be defined within this function, one should call the Module instance itself rather than forward() directly, since the former runs any registered hooks while the latter silently ignores them.
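The note above is standard PyTorch behavior. A minimal sketch (plain Python, no torch dependency) of why calling the instance differs from calling `forward()` directly: `__call__` runs registered hooks, while a direct `forward()` call skips them. The `TinyModule` class here is a hypothetical stand-in for `torch.nn.Module`.

```python
class TinyModule:
    """Toy stand-in for nn.Module that mimics forward-hook dispatch."""

    def __init__(self):
        self._forward_hooks = []

    def register_forward_hook(self, fn):
        self._forward_hooks.append(fn)

    def forward(self, x):
        return x * 2

    def __call__(self, x):
        out = self.forward(x)
        for hook in self._forward_hooks:
            hook(self, x, out)  # hooks observe the input and output
        return out

m = TinyModule()
seen = []
m.register_forward_hook(lambda mod, inp, out: seen.append(out))
y1 = m(3)          # runs the hook: seen becomes [6]
y2 = m.forward(3)  # same result, but silently skips the hook
```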
output_size() → int
Return the size of the output feature dimension D, i.e., the embedding dimension.