tf.nn.convolution


Computes sums of N-D convolutions (actually cross-correlation).

tf.nn.convolution(
    input, filters, strides=None, padding='VALID', data_format=None, dilations=None,
    name=None
)

This also supports either output striding via the optional strides parameter, or atrous convolution (also known as convolution with holes or dilated convolution, from the French word "trous", meaning "holes") via the optional dilations parameter. Currently, however, output striding is not supported for atrous convolutions.
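As a quick illustration of the two modes, here is a minimal sketch on a made-up 8x8, 3-channel input (the shapes and values are assumptions for illustration, not part of this reference):

import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])    # [batch, height, width, in_channels]
w = tf.random.normal([3, 3, 3, 16])   # [fh, fw, in_channels, out_channels]

# Output striding: evaluate the filter only at every other output position.
strided = tf.nn.convolution(x, w, strides=[2, 2], padding='SAME')    # (1, 4, 4, 16)

# Atrous (dilated) convolution: insert holes in the filter instead of striding.
dilated = tf.nn.convolution(x, w, dilations=[2, 2], padding='SAME')  # (1, 8, 8, 16)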

Specifically, in the case that data_format does not start with "NC", given a rank (N+2) input Tensor of shape

[num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels],

a rank (N+2) filters Tensor of shape

[spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels],

an optional dilations tensor of shape [N] specifying the filter upsampling/input downsampling rate, and an optional list of N strides (defaulting to [1]*N), this computes for each N-D spatial output position (x[0], ..., x[N-1]):

output[b, x[0], ..., x[N-1], k] =
      sum_{z[0], ..., z[N-1], q}
          filters[z[0], ..., z[N-1], q, k] *
          padded_input[b,
                       x[0]*strides[0] + dilations[0]*z[0],
                       ...,
                       x[N-1]*strides[N-1] + dilations[N-1]*z[N-1],
                       q]

where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, padded_input is obtained by zero padding the input, using an effective spatial filter shape of (spatial_filter_shape - 1) * dilations + 1, and then applying output striding according to strides.
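As a sanity check, the sum above can be evaluated directly for a tiny 1-D case (N = 1, VALID padding, unit stride, no dilation); the input and filter values below are made up for illustration:

import numpy as np
import tensorflow as tf

x = np.arange(5, dtype=np.float32).reshape(1, 5, 1)        # [batch, width, in_channels]
w = np.array([1., 0., -1.], np.float32).reshape(3, 1, 1)   # [fw, in_channels, out_channels]

out = tf.nn.convolution(x, w, padding='VALID')             # shape (1, 3, 1)

# Direct evaluation of output[b, x, k] = sum_z filters[z, q, k] * input[b, x + z, q]
manual = np.array([np.sum(w[:, 0, 0] * x[0, i:i + 3, 0]) for i in range(3)]).reshape(1, 3, 1)
np.testing.assert_allclose(out.numpy(), manual)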

In the case that data_format does start with "NC", the input and output (but not the filters) are simply transposed as follows:

convolution(input, data_format, **kwargs) =
    tf.transpose(convolution(tf.transpose(input, [0] + range(2, N+2) + [1]),
                             **kwargs),
                 [0, N+1] + range(1, N+1))

It is required that 1 <= N <= 3.
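The following sketch spells out that equivalence for N = 2 with a channels-first layout; note that channels-first convolutions are typically only available on GPU kernels, and the shapes here are assumptions for illustration:

import tensorflow as tf

x_nchw = tf.random.normal([1, 3, 8, 8])   # [batch, channels, height, width]
w = tf.random.normal([3, 3, 3, 16])       # filters are laid out the same either way

# Channels-first call (generally requires a device with an NCHW kernel, e.g. a GPU).
out_nchw = tf.nn.convolution(x_nchw, w, padding='SAME', data_format='NCHW')

# Same result via the channels-last path plus the two transposes (N = 2).
x_nhwc = tf.transpose(x_nchw, [0, 2, 3, 1])
out_nhwc = tf.nn.convolution(x_nhwc, w, padding='SAME')
tf.debugging.assert_near(out_nchw, tf.transpose(out_nhwc, [0, 3, 1, 2]))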

Args:

input: An (N+2)-D Tensor of shape [batch_size] + input_spatial_shape + [in_channels] if data_format does not start with "NC", or [batch_size, in_channels] + input_spatial_shape if it does.
filters: An (N+2)-D Tensor with the same type as input, of shape spatial_filter_shape + [in_channels, out_channels].
strides: Optional. A sequence of N positive integers specifying the output stride (defaults to [1]*N). If any value of strides is greater than 1, all values of dilations must be 1.
padding: A string, either "VALID" or "SAME", specifying the padding algorithm.
data_format: A string or None. Specifies whether the channel dimension is the last dimension (the default, when data_format does not start with "NC") or the second dimension (when it starts with "NC"), e.g. "NWC"/"NCW" for N=1, "NHWC"/"NCHW" for N=2, "NDHWC"/"NCDHW" for N=3.
dilations: Optional. A sequence of N positive integers specifying the filter upsampling/input downsampling rate (defaults to [1]*N). If any value of dilations is greater than 1, all values of strides must be 1.
name: Optional name for the returned tensor.

Returns:

A Tensor with the same type as input, of shape

`[batch_size] + output_spatial_shape + [out_channels]`

if data_format is None or does not start with "NC", or

`[batch_size, out_channels] + output_spatial_shape`

if data_format starts with "NC", where output_spatial_shape depends on the value of padding.

If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])

If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).

Raises: