espnet2.enh.layers.ncsnpp_utils.up_or_down_sampling.conv_downsample_2d
espnet2.enh.layers.ncsnpp_utils.up_or_down_sampling.conv_downsample_2d(x, w, k=None, factor=2, gain=1)
Fused tf.nn.conv2d() followed by downsample_2d().
Padding is performed only once at the beginning, not between the operations. The fused op is considerably more efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of arbitrary order.
- Parameters:
  - x – Input tensor of the shape [N, C, H, W] or [N, H, W, C].
  - w – Weight tensor of the shape [filterH, filterW, inChannels, outChannels]. Grouped convolution can be performed by inChannels = x.shape[0] // numGroups.
- k – FIR filter of the shape [firH, firW] or [firN] (separable). The default is [1] * factor, which corresponds to average pooling.
- factor – Integer downsampling factor (default: 2).
- gain – Scaling factor for signal magnitude (default: 1.0).
- Returns: Tensor of the shape [N, C, H // factor, W // factor] or [N, H // factor, W // factor, C], with the same datatype as x.
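To make the fused semantics concrete, the sketch below is a naive NumPy reference (an assumption for illustration, not ESPnet's actual fused implementation): a valid 2-D cross-correlation of x with w, followed by the default k = [1] * factor FIR filter, which reduces to average pooling over factor × factor blocks.

```python
import numpy as np

def naive_conv_downsample_2d(x, w, factor=2):
    """Naive reference (illustrative only, not the fused ESPnet op).

    x: input of shape [N, Cin, H, W]
    w: weights of shape [filterH, filterW, Cin, Cout]
    Step 1: valid cross-correlation of x with w.
    Step 2: downsample with the default k = [1] * factor,
            i.e. average pooling over factor x factor blocks.
    """
    n, cin, h, wid = x.shape
    fh, fw, _, cout = w.shape
    oh, ow = h - fh + 1, wid - fw + 1
    y = np.zeros((n, cout, oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            patch = x[:, :, i:i + fh, j:j + fw]        # [N, Cin, fh, fw]
            # correlate each patch with the weights -> [N, Cout]
            y[:, :, i, j] = np.einsum('ncij,ijco->no', patch, w)
    # average-pool the conv output over factor x factor blocks
    oh2, ow2 = oh // factor, ow // factor
    y = y[:, :, :oh2 * factor, :ow2 * factor]
    return y.reshape(n, cout, oh2, factor, ow2, factor).mean(axis=(3, 5))

# With a 1x1 identity weight, the result is plain 2x2 average pooling.
x = np.arange(16, dtype=np.float64).reshape(1, 1, 4, 4)
w = np.ones((1, 1, 1, 1))
out = naive_conv_downsample_2d(x, w)  # shape (1, 1, 2, 2)
```

The fused op avoids materializing the intermediate full-resolution conv output and pads only once up front, which is where its efficiency advantage over the two-step version comes from.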