espnet2.asr.encoder.whisper_encoder.OpenAIWhisperEncoder
class espnet2.asr.encoder.whisper_encoder.OpenAIWhisperEncoder(input_size: int = 1, dropout_rate: float = 0.0, whisper_model: str = 'small', download_dir: str | None = None, use_specaug: bool = False, specaug_conf: dict | None = None, do_pad_trim: bool = False)
Bases: AbsEncoder
Transformer-based speech encoder from OpenAI's Whisper model.
URL: https://github.com/openai/whisper
Initializes internal Module state, shared by both nn.Module and ScriptModule.
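Example (a minimal sketch; "small" is one of the standard Whisper checkpoint names, and constructing the encoder loads the corresponding weights, downloading them to download_dir if they are not already cached):

    import torch
    from espnet2.asr.encoder.whisper_encoder import OpenAIWhisperEncoder

    # Build an encoder backed by the Whisper "small" checkpoint.
    encoder = OpenAIWhisperEncoder(whisper_model="small", do_pad_trim=True)

    # Hidden dimension of the underlying Whisper encoder (768 for "small").
    print(encoder.output_size())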
forward(xs_pad: Tensor, ilens: Tensor, prev_states: Tensor | None = None) → Tuple[Tensor, Tensor, Tensor | None]
Defines the computation performed at every call.
Should be overridden by all subclasses.
NOTE
Although the forward pass must be defined within this function, call the Module instance itself rather than this method directly: the instance call runs any registered hooks, while a direct call to forward() silently ignores them.
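Example (continuing the sketch above, with illustrative shapes; since input_size is 1, the encoder consumes raw 16 kHz waveforms and computes log-mel features internally):

    # Batch of two padded waveforms and their true lengths in samples.
    xs_pad = torch.randn(2, 16000 * 5)
    ilens = torch.tensor([16000 * 5, 16000 * 3])

    # Call the module instance (not .forward()) so hooks are run.
    feats, feat_lens, _ = encoder(xs_pad, ilens)
    # feats: (batch, frames, encoder.output_size()); feat_lens: (batch,)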
log_mel_spectrogram(audio: Tensor, ilens: Tensor | None = None) → Tensor
Compute a log-mel spectrogram using the same computation Whisper was trained with.
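Example (a sketch; n_mels is 80 for the non-large Whisper models. Despite the annotation above, current ESPnet code returns the features together with the output frame lengths when ilens is given; adjust the unpacking if your version returns only the tensor):

    mel, mel_lens = encoder.log_mel_spectrogram(xs_pad, ilens)
    # mel: (batch, n_mels, frames); mel_lens: (batch,)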
output_size() → int
pad_or_trim(array: Tensor, length: int, axis: int = -1) → Tensor
Pad or trim the audio array to N_SAMPLES (Whisper's fixed 30-second context: 480000 samples at 16 kHz).
Used in zero-shot inference cases.
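Example (a sketch; the target length here is Whisper's N_SAMPLES, i.e. 30 s of 16 kHz audio):

    # A 40 s clip is trimmed to 30 s; a shorter clip would be zero-padded.
    audio = torch.randn(1, 16000 * 40)
    fixed = encoder.pad_or_trim(audio, length=480000)
    assert fixed.shape[-1] == 480000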
whisper_encode(input: Tensor, ilens: Tensor | None = None) → Tensor
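Run log-mel features through Whisper's convolutional frontend and transformer blocks. A sketch chaining it with log_mel_spectrogram (same caveat as above: this assumes the current ESPnet behavior of returning features paired with their lengths):

    mel, mel_lens = encoder.log_mel_spectrogram(xs_pad, ilens)
    enc, enc_lens = encoder.whisper_encode(mel, mel_lens)
    # enc: (batch, frames, encoder.output_size())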