espnet.nets.pytorch_backend.transformer.dynamic_conv.DynamicConvolution
class espnet.nets.pytorch_backend.transformer.dynamic_conv.DynamicConvolution(wshare, n_feat, dropout_rate, kernel_size, use_kernel_mask=False, use_bias=False)
Bases: Module
Dynamic Convolution layer.
This implementation is based on https://github.com/pytorch/fairseq/tree/master/fairseq
- Parameters:
- wshare (int) – the number of weight-sharing heads (shared convolution kernels)
- n_feat (int) – the number of features
- dropout_rate (float) – dropout_rate
- kernel_size (int) – kernel size (length)
- use_kernel_mask (bool) – whether to apply a causal mask to the convolution kernel
- use_bias (bool) – whether to use a bias term.
Construct Dynamic Convolution layer.
forward(query, key, value, mask)
Forward of ‘Dynamic Convolution’.
This function takes query, key, and value but uses only query; the full signature exists only for compatibility with the self-attention layer (attention.py).
- Parameters:
- query (torch.Tensor) – (batch, time1, d_model) input tensor
- key (torch.Tensor) – (batch, time2, d_model) NOT USED
- value (torch.Tensor) – (batch, time2, d_model) NOT USED
- mask (torch.Tensor) – (batch, time1, time2) mask
- Returns: (batch, time1, d_model) output
- Return type: x (torch.Tensor)
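To illustrate what the layer computes, the following is a minimal, hypothetical NumPy sketch of dynamic convolution for a single example: a per-position kernel is predicted from the query by a linear projection (here the assumed `weight_proj` matrix), softmax-normalized over the kernel taps, and applied as a depthwise convolution shared across `wshare` heads. The actual module adds dropout, bias, and the optional kernel mask, and operates on batched tensors.

```python
import numpy as np

def dynamic_conv(query, weight_proj, kernel_size, wshare):
    """Simplified single-example sketch of dynamic convolution.

    query: (time1, n_feat) input
    weight_proj: (n_feat, wshare * kernel_size) hypothetical projection
        that predicts one kernel per position and per head
    """
    T, C = query.shape
    assert C % wshare == 0
    group = C // wshare  # channels handled by each weight-sharing head

    # Predict position-dependent kernels, then softmax over the taps.
    k = (query @ weight_proj).reshape(T, wshare, kernel_size)
    k = np.exp(k - k.max(axis=-1, keepdims=True))
    k = k / k.sum(axis=-1, keepdims=True)

    # Same-length ("same" padding) convolution with the predicted kernels.
    pad = kernel_size // 2
    x = np.pad(query, ((pad, pad), (0, 0)))
    out = np.zeros_like(query)
    for t in range(T):
        window = x[t:t + kernel_size]               # (kernel_size, C)
        for h in range(wshare):
            sl = slice(h * group, (h + 1) * group)
            # Weighted sum over taps, shared within the head's channel group.
            out[t, sl] = k[t, h] @ window[:, sl]
    return out
```

A quick shape check: for `query` of shape `(5, 8)` with `wshare=4` and `kernel_size=3`, the output keeps the input shape `(5, 8)`, matching the `(batch, time1, d_model)` contract of `forward`.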