conv – Convolution
Note
Two similar implementations exist for conv2d: signal.conv2d and nnet.conv2d.
The former implements a traditional 2D convolution, while the latter implements the convolutional layers present in convolutional neural networks (where filters are 3D and pool over several input channels).
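For comparison, a minimal sketch of the nnet counterpart (assuming Theano is installed; the variable names are illustrative): nnet.conv.conv2d expects a 4D input of shape (batch size, input channels, height, width) and 4D filters of shape (output channels, input channels, height, width)::

    import theano.tensor as T
    from theano.tensor.nnet import conv as nnet_conv

    images = T.dtensor4('images')    # (batch size, input channels, height, width)
    filters = T.dtensor4('filters')  # (output channels, input channels, height, width)

    # Each output channel pools (sums) over all input channels.
    output = nnet_conv.conv2d(images, filters, border_mode='valid')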
theano.tensor.signal.conv.conv2d(input, filters, image_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), **kargs)

signal.conv.conv2d performs a basic 2D convolution of the input with the given filters. The input parameter can be a single 2D image or a 3D tensor containing a set of images. Similarly, filters can be a single 2D filter or a 3D tensor corresponding to a set of 2D filters.
The shape parameters are optional; providing them allows faster execution (a usage sketch follows the parameter list below).
Parameters:
- input (dmatrix or dtensor3) – Symbolic variable for images to be filtered.
- filters (dmatrix or dtensor3) – Symbolic variable containing filter values.
- border_mode ({'valid', 'full'}) – See scipy.signal.convolve2d.
- subsample – Factor by which to subsample the output.
- image_shape (tuple of length 2 or 3) – ([number of images,] image height, image width).
- filter_shape (tuple of length 2 or 3) – ([number of filters,] filter height, filter width).
- kwargs – See theano.tensor.nnet.conv.conv2d.

Returns: Tensor of filtered images, with shape ([number of images,] [number of filters,] image height, image width).

Return type: symbolic 2D, 3D, or 4D tensor
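A minimal usage sketch, assuming Theano and NumPy are installed (the array sizes are illustrative)::

    import numpy
    import theano
    import theano.tensor as T
    from theano.tensor.signal import conv

    images = T.dtensor3('images')    # (number of images, height, width)
    filters = T.dtensor3('filters')  # (number of filters, height, width)

    # 'valid' mode shrinks each output map to
    # (image height - filter height + 1, image width - filter width + 1).
    filtered = conv.conv2d(images, filters, border_mode='valid')
    f = theano.function([images, filters], filtered)

    rng = numpy.random.RandomState(0)
    out = f(rng.rand(2, 28, 28), rng.rand(3, 5, 5))
    print(out.shape)                 # (2, 3, 24, 24)

With a 3D input and 3D filters, the result is the 4D tensor described above, indexed as (image, filter, row, column).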
conv.fft(*todo)

[James has some code for this, but hasn't gotten it into the source tree yet.]