espnet2.gan_codec.funcodec.funcodec.FunCodec
class espnet2.gan_codec.funcodec.funcodec.FunCodec(sampling_rate: int = 24000, generator_params: Dict[str, Any] = {'codec_domain': ['time', 'time'], 'decoder_final_activation': None, 'decoder_final_activation_params': None, 'decoder_trim_right_ratio': 1.0, 'domain_conf': {}, 'encdec_activation': 'ELU', 'encdec_activation_params': {'alpha': 1.0}, 'encdec_causal': False, 'encdec_channels': 1, 'encdec_compress': 2, 'encdec_dilation_base': 2, 'encdec_kernel_size': 7, 'encdec_last_kernel_size': 7, 'encdec_lstm': 2, 'encdec_n_filters': 32, 'encdec_n_residual_layers': 1, 'encdec_norm': 'weight_norm', 'encdec_norm_params': {}, 'encdec_pad_mode': 'reflect', 'encdec_ratios': [(8, 1), (5, 1), (4, 1), (2, 1)], 'encdec_residual_kernel_size': 7, 'encdec_true_skip': False, 'hidden_dim': 128, 'quantizer_bins': 1024, 'quantizer_decay': 0.99, 'quantizer_dropout': True, 'quantizer_kmeans_init': True, 'quantizer_kmeans_iters': 50, 'quantizer_n_q': 8, 'quantizer_target_bandwidth': [7.5, 15], 'quantizer_threshold_ema_dead_code': 2}, discriminator_params: Dict[str, Any] = {'complexstft_discriminator_params': {'chan_mults': (1, 2, 4, 4, 8, 8), 'channels': 32, 'hop_length': 256, 'in_channels': 1, 'logits_abs': True, 'n_fft': 1024, 'stft_normalized': False, 'strides': ((1, 2), (2, 2), (1, 2), (2, 2), (1, 2), (2, 2)), 'win_length': 1024}, 'period_discriminator_params': {'bias': True, 'channels': 32, 'downsample_scales': [3, 3, 3, 3, 1], 'in_channels': 1, 'kernel_sizes': [5, 3], 'max_downsample_channels': 1024, 'nonlinear_activation': 'LeakyReLU', 'nonlinear_activation_params': {'negative_slope': 0.1}, 'out_channels': 1, 'use_spectral_norm': False, 'use_weight_norm': True}, 'periods': [2, 3, 5, 7, 11], 'scale_discriminator_params': {'bias': True, 'channels': 128, 'downsample_scales': [2, 2, 4, 4, 1], 'in_channels': 1, 'kernel_sizes': [15, 41, 5, 3], 'max_downsample_channels': 1024, 'max_groups': 16, 'nonlinear_activation': 'LeakyReLU', 'nonlinear_activation_params': {'negative_slope': 0.1}, 
'out_channels': 1}, 'scale_downsample_pooling': 'AvgPool1d', 'scale_downsample_pooling_params': {'kernel_size': 4, 'padding': 2, 'stride': 2}, 'scale_follow_official_norm': False, 'scales': 3}, generator_adv_loss_params: Dict[str, Any] = {'average_by_discriminators': False, 'loss_type': 'mse'}, discriminator_adv_loss_params: Dict[str, Any] = {'average_by_discriminators': False, 'loss_type': 'mse'}, use_feat_match_loss: bool = True, feat_match_loss_params: Dict[str, Any] = {'average_by_discriminators': False, 'average_by_layers': False, 'include_final_outputs': True}, use_mel_loss: bool = True, mel_loss_params: Dict[str, Any] = {'fmax': None, 'fmin': 0, 'fs': 24000, 'log_base': None, 'n_mels': 80, 'range_end': 11, 'range_start': 6, 'window': 'hann'}, use_dual_decoder: bool = False, lambda_quantization: float = 1.0, lambda_reconstruct: float = 1.0, lambda_commit: float = 1.0, lambda_adv: float = 1.0, lambda_feat_match: float = 2.0, lambda_mel: float = 45.0, cache_generator_outputs: bool = False)
Bases: AbsGANCodec
FunCodec model.
Initialize FunCodec model.
- Parameters: TODO (jiatong)
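The `quantizer_*` arguments configure a residual vector quantizer (RVQ): `quantizer_n_q` codebooks of `quantizer_bins` entries applied in sequence, each stage quantizing the residual left by the previous one. A minimal pure-Python sketch of that idea (toy hand-picked scalar codebooks for illustration only; the actual quantizer uses learned codebooks with k-means initialization, EMA decay, and dead-code replacement):

```python
# Toy residual vector quantization: each stage quantizes the residual
# left by the previous stage. Illustrative only -- real codebooks are
# learned vectors, not hand-picked scalars.

def nearest(codebook, x):
    """Index of the codebook entry closest to scalar x."""
    return min(range(len(codebook)), key=lambda i: abs(codebook[i] - x))

def rvq_encode(codebooks, x):
    """Return one code index per stage (the 'N_stream' axis)."""
    codes, residual = [], x
    for cb in codebooks:
        idx = nearest(cb, residual)
        codes.append(idx)
        residual -= cb[idx]          # next stage sees what is left
    return codes

def rvq_decode(codebooks, codes):
    """Sum the selected entries across stages."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

# Two stages ~ quantizer_n_q=2; four entries each ~ quantizer_bins=4.
codebooks = [[-1.0, -0.3, 0.3, 1.0], [-0.2, -0.05, 0.05, 0.2]]
codes = rvq_encode(codebooks, 0.47)     # -> [2, 3]
approx = rvq_decode(codebooks, codes)   # 0.3 + 0.2 = 0.5
```

Each extra stage refines the approximation, which is why dropping trailing streams (as `quantizer_dropout` and the bandwidth targets allow) trades quality for bitrate gracefully.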
decode(x: Tensor, **kwargs) → Tensor
Run decoding.
- Parameters: x (Tensor) – Input codes (T_code, N_stream).
- Returns: Generated waveform (T_wav,).
- Return type: Tensor
encode(x: Tensor, **kwargs) → Tensor
Run encoding.
- Parameters: x (Tensor) – Input audio (T_wav,).
- Returns: Generated codes (T_code, N_stream).
- Return type: Tensor
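The code length `T_code` follows from the encoder's total stride: with the default `encdec_ratios` of (8, 1), (5, 1), (4, 1), (2, 1), the temporal strides multiply to 320, so at 24 kHz the codes arrive at 24000 / 320 = 75 frames per second. A quick sanity check of that arithmetic (the real encoder's padding may shift the count by a frame at the edges):

```python
import math

sampling_rate = 24000
encdec_ratios = [(8, 1), (5, 1), (4, 1), (2, 1)]

# Total temporal downsampling = product of the time-axis strides.
total_stride = math.prod(r for r, _ in encdec_ratios)   # 320
frame_rate = sampling_rate / total_stride               # codes per second

# One second of audio -> roughly this many code frames, each carrying
# quantizer_n_q stream indices along the N_stream axis.
t_wav = sampling_rate
t_code = t_wav // total_stride
```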
forward(audio: Tensor, forward_generator: bool = True, **kwargs) → Dict[str, Any]
Perform generator (or discriminator) forward.
- Parameters:
- audio (Tensor) – Audio waveform tensor (B, T_wav).
- forward_generator (bool) – Whether to forward generator.
- Returns:
- loss (Tensor): Loss scalar tensor.
- stats (Dict[str, float]): Statistics to be monitored.
- weight (Tensor): Weight tensor to summarize losses.
- optim_idx (int): Optimizer index (0 for G and 1 for D).
- Return type: Dict[str, Any]
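The `optim_idx` entry tells the trainer which optimizer the returned loss belongs to, so generator and discriminator updates can alternate from the same `forward` signature. A schematic of that convention (the model and optimizers below are stand-ins, not the ESPnet trainer):

```python
# Schematic GAN alternation driven by optim_idx: 0 -> generator step,
# 1 -> discriminator step. fake_forward only mimics the return contract
# of FunCodec.forward; no real model is involved.

def fake_forward(audio, forward_generator=True):
    key = "generator_loss" if forward_generator else "discriminator_loss"
    return {
        "loss": 1.0,                # loss scalar
        "stats": {key: 1.0},        # monitored statistics
        "weight": len(audio),       # weight to summarize losses
        "optim_idx": 0 if forward_generator else 1,
    }

def train_step(batch, optimizers, turn_is_generator):
    out = fake_forward(batch, forward_generator=turn_is_generator)
    chosen = optimizers[out["optim_idx"]]   # pick G or D optimizer
    chosen.append(out["loss"])              # stand-in for backward()+step()
    return out

g_updates, d_updates = [], []
for step in range(4):
    # Alternate G and D turns, as a GAN trainer typically does.
    train_step([0.0] * 8, (g_updates, d_updates),
               turn_is_generator=(step % 2 == 0))
```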
inference(x: Tensor, **kwargs) → Dict[str, Tensor]
Run inference.
- Parameters: x (Tensor) – Input audio (T_wav,).
- Returns:
- wav (Tensor): Generated waveform tensor (T_wav,).
- codec (Tensor): Generated neural codec (T_code, N_stream).
- Return type: Dict[str, Tensor]
meta_info() → Dict[str, Any]
Return meta information of the codec.
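The codec's implied bitrate follows from frame rate × number of streams × bits per code, which is the kind of quantity a codec's meta information summarizes. A back-of-envelope under the default settings above (formula only; the exact fields `meta_info` returns are defined in the ESPnet source):

```python
import math

# Settings taken from the defaults in the class signature above.
sampling_rate = 24000
total_stride = 320            # product of encdec_ratios time strides
quantizer_n_q = 8
quantizer_bins = 1024

frame_rate = sampling_rate // total_stride          # 75 codes/s
bits_per_code = int(math.log2(quantizer_bins))      # 10 bits per index
bitrate_bps = frame_rate * quantizer_n_q * bits_per_code  # all streams
```

Using fewer streams at inference (via the quantizer's target bandwidths) scales this bitrate down proportionally.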