conv – Convolution

Note

Two similar implementations exist for conv2d: aesara.tensor.signal.conv.conv2d and aesara.tensor.nnet.conv2d.

The former implements a traditional 2D convolution, while the latter implements the convolutional layers present in convolutional neural networks (where filters are 3D and pool over several input channels).
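To make the distinction concrete, here is a minimal NumPy sketch (illustrative only, not the aesara implementation) of the CNN-style convolution described above: each filter is 3D, and the per-channel results are summed, i.e. "pooled over several input channels". The helper name `nn_conv2d` is hypothetical.

```python
import numpy as np

def nn_conv2d(image, filt):
    """CNN-style 'valid' convolution: image is (channels, H, W), filt is
    (channels, fh, fw); per-channel convolutions are summed into one map.
    Illustrative sketch, not the aesara implementation."""
    c, h, w = image.shape
    fc, fh, fw = filt.shape
    assert c == fc, "filter must have one slice per input channel"
    out = np.zeros((h - fh + 1, w - fw + 1))
    for ch in range(c):
        flipped = filt[ch, ::-1, ::-1]  # true convolution flips the filter
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] += np.sum(image[ch, i:i + fh, j:j + fw] * flipped)
    return out

image = np.ones((3, 5, 5))          # 3 input channels
filt = np.ones((3, 2, 2))           # one 3D filter
result = nn_conv2d(image, filt)
print(result.shape)  # (4, 4): channels are summed away, not stacked
```

By contrast, the traditional 2D convolution implemented by signal.conv.conv2d treats each 2D filter independently and never sums across a channel axis.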

aesara.tensor.signal.conv.conv2d(input, filters, image_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), **kwargs)[source]

signal.conv.conv2d performs a basic 2D convolution of the input with the given filters. The input parameter can be a single 2D image or a 3D tensor, containing a set of images. Similarly, filters can be a single 2D filter or a 3D tensor, corresponding to a set of 2D filters.

Shape parameters are optional; supplying them can result in faster execution.

Parameters:
  • input (Symbolic aesara tensor for images to be filtered.) – Dimensions: ([num_images], image height, image width)
  • filters (Symbolic aesara tensor for convolution filter(s).) – Dimensions: ([num_filters], filter height, filter width)
  • border_mode ({'valid', 'full'}) – See scipy.signal.convolve2d.
  • subsample – Factor by which to subsample output.
  • image_shape (tuple of length 2 or 3) – ([num_images,] image height, image width).
  • filter_shape (tuple of length 2 or 3) – ([num_filters,] filter height, filter width).
  • kwargs – See aesara.tensor.nnet.conv.conv2d.
Returns:

Tensor of filtered images, with shape ([num_images,] [num_filters,] image height, image width).

Return type:

symbolic 2D, 3D, or 4D tensor
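As a reference for what the 'valid' border mode computes (matching scipy.signal.convolve2d semantics, which the border_mode parameter points to), here is a plain NumPy sketch of a single-image, single-filter 2D convolution. The function name `conv2d_valid` is hypothetical and the code is illustrative, not the aesara implementation.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'valid'-mode 2D convolution of one 2D image with one 2D filter.
    True convolution flips the kernel, as scipy.signal.convolve2d does.
    Illustrative sketch, not the aesara implementation."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # flip in both axes before sliding
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
result = conv2d_valid(image, kernel)
print(result.shape)  # (3, 3): 'valid' output is (H - kh + 1, W - kw + 1)
```

With border_mode='full' the output would instead have shape (H + kh - 1, W + kw - 1), padding the image with zeros so every overlap position contributes.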

aesara.tensor.signal.conv.fft(*todo)[source]

[James has some code for this, but hasn’t gotten it into the source tree yet.]