conv – Ops for convolutional neural nets

Note

Two similar implementations exist for conv2d. One implements a traditional 2D convolution, while the other implements the convolutional layers present in convolutional neural networks (where filters are 3D and pool over several input channels).

The recommended user interfaces are nnet.conv2d and nnet.conv3d, documented below.

With these interfaces, Aesara will automatically use the fastest implementation in many cases. On the CPU, the implementation is GEMM-based.

This auto-tuning has the inconvenience that the first call is much slower, as it tries and times each implementation it has. So if you benchmark, it is important to exclude the first call from your timing, e.g. as sketched below.
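
A minimal benchmarking sketch that discards the warm-up call; the shapes, iteration count and timing approach here are the editor's assumptions, not part of the original docs:

    import time

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet import conv2d

    x = at.tensor4("x")
    w = at.tensor4("w")
    f = aesara.function([x, w], conv2d(x, w))

    x_val = np.random.standard_normal((8, 3, 64, 64)).astype(aesara.config.floatX)
    w_val = np.random.standard_normal((16, 3, 5, 5)).astype(aesara.config.floatX)

    f(x_val, w_val)  # warm-up call: much slower, excluded from the timing

    start = time.perf_counter()
    for _ in range(10):
        f(x_val, w_val)
    print("mean time per call:", (time.perf_counter() - start) / 10)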

Implementation Details

This section gives more implementation details. Most of the time you do not need to read it; Aesara will select the appropriate implementation for you.

  • Implemented operators for neural network 2D / image convolution:
    • nnet.conv.conv2d: the old 2D convolution. DO NOT USE ANYMORE.

      For each element in a batch, it first creates a Toeplitz matrix in a CUDA kernel. Then, it performs a GEMM call to multiply this Toeplitz matrix with the filters (hence the name: MM stands for matrix multiplication). It needs extra memory for the Toeplitz matrix, which is a 2D matrix of shape (number of channels * filter width * filter height, output width * output height).

    • CorrMM: a CPU-only 2D correlation implementation taken from Caffe’s C++ implementation. It does not flip the kernel.

  • Implemented operators for neural network 3D / video convolution:
    • Corr3dMM: a CPU-only 3D correlation implementation based on the 2D version (CorrMM). It does not flip the kernel. As it provides a gradient, you can use it as a replacement for nnet.conv3d. For convolutions done on the CPU, nnet.conv3d will be replaced by Corr3dMM.
    • conv3d2d: another conv3d implementation that uses conv2d with data reshaping. It is faster than conv3d in some corner cases. It flips the kernel.
aesara.tensor.nnet.conv2d(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, image_shape=None, filter_dilation=(1, 1), num_groups=1, unshared=False, **kwargs)[source]

This function will build the symbolic graph for convolving a mini-batch of a stack of 2D inputs with a set of 2D filters. The implementation is modelled after Convolutional Neural Networks (CNN).

Parameters:
  • input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape.
  • filters (symbolic 4D or 6D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter rows, filter columns) for normal convolution and (output channels, output rows, output columns, input channels, filter rows, filter columns) for unshared convolution. See the optional parameter filter_shape.
  • input_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • filter_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or a tuple of two ints or pairs of ints) –

    Either of the following:

    'valid': apply filter wherever it completely overlaps with the
    input. Generates output of shape: input shape - filter shape + 1
    'full': apply filter wherever it partly overlaps with the input.
    Generates output of shape: input shape + filter shape - 1
    'half': pad input with a symmetric border of filter rows // 2
    rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape.
    int: pad input with a symmetric border of zeros of the given
    width, then perform a valid convolution.
    (int1, int2): (for 2D) pad input with a symmetric border of int1,
    int2, then perform a valid convolution.
    (int1, (int2, int3)) or ((int1, int2), int3): (for 2D)
    pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2).
  • subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • image_shape (None, tuple/list of len 4 of int or Constant variable) – Deprecated alias for input_shape.
  • filter_dilation (tuple of len 2) – Factor by which to subsample (stride) the input. Also called dilation elsewhere.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
  • unshared (bool) – If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input.
  • kwargs – Any other keyword arguments are accepted for backwards compatibility, but will be ignored.
Returns:

Set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns)

Return type:

Symbolic 4D tensor
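
To make the border_mode options above concrete, here is a small sketch (shapes and modes chosen by the editor, not part of the reference) that evaluates the resulting output shape for each mode:

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet import conv2d

    x = at.tensor4("x")
    w = at.tensor4("w")
    x_val = np.zeros((1, 3, 32, 32), dtype=aesara.config.floatX)
    w_val = np.zeros((4, 3, 5, 5), dtype=aesara.config.floatX)

    # Evaluate the output shape for a 5x5 filter on a 32x32 input.
    for mode in ("valid", "full", "half", 2):
        out = conv2d(x, w, border_mode=mode)
        print(mode, aesara.function([x, w], out.shape)(x_val, w_val))

    # Expected rows/columns:
    #   valid: 32 - 5 + 1 = 28
    #   full:  32 + 5 - 1 = 36
    #   half:  32 (odd filter size, output equals input)
    #   2:     32 + 2*2 - 5 + 1 = 32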

aesara.tensor.nnet.conv2d_transpose(input, filters, output_shape, filter_shape=None, border_mode='valid', input_dilation=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

This function will build the symbolic graph for applying a transposed convolution over a mini-batch of a stack of 2D inputs with a set of 2D filters.

Parameters:
  • input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape.
  • filters (symbolic 4D tensor) – Set of filters used in CNN layer of shape (input channels, output channels, filter rows, filter columns). See the optional parameter filter_shape. Note: the order of output_channels and input_channels is reversed with respect to conv2d.
  • output_shape (tuple/list of len 4 of int or Constant variable) – The shape of the output of conv2d_transpose. The last two elements are allowed to be aesara.tensor.type.scalar variables.
  • filter_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of two int) – Refers to the border_mode argument of the corresponding forward (non-transposed) convolution. See the argument description in conv2d. What was padding for the forward convolution means cropping the output of the transposed one. valid corresponds to no cropping, full to maximal cropping.
  • input_dilation (tuple of len 2) – Corresponds to subsample (also called strides elsewhere) in the non-transposed convolution.
  • filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 2) – Factor by which to subsample (stride) the input. Also called dilation elsewhere.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
  • unshared (bool) – If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input. Grouped unshared convolution is supported.
Returns:

Set of feature maps generated by the transposed convolution. Tensor is of shape (batch size, output channels, output rows, output columns)

Return type:

Symbolic 4D tensor
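
A hedged usage sketch (the shapes below are the editor's example, not taken from the reference): upsampling a (1, 16, 4, 4) stack to (1, 8, 9, 9) with 3x3 filters and a stride of 2 in the corresponding forward convolution.

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet import conv2d_transpose

    x = at.tensor4("x")  # (batch, input channels, rows, cols)
    w = at.tensor4("w")  # (input channels, output channels, filter rows, filter cols)
    y = conv2d_transpose(
        x, w, output_shape=(1, 8, 9, 9), border_mode="valid", input_dilation=(2, 2)
    )

    f = aesara.function([x, w], y.shape)
    x_val = np.zeros((1, 16, 4, 4), dtype=aesara.config.floatX)
    w_val = np.zeros((16, 8, 3, 3), dtype=aesara.config.floatX)
    print(f(x_val, w_val))  # expected: [1 8 9 9]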

aesara.tensor.nnet.conv3d(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1), num_groups=1)[source]

This function will build the symbolic graph for convolving a mini-batch of a stack of 3D inputs with a set of 3D filters. The implementation is modelled after Convolutional Neural Networks (CNN).

Parameters:
  • input (symbolic 5D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input depth, input rows, input columns). See the optional parameter input_shape.
  • filters (symbolic 5D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter depth, filter rows, filter columns). See the optional parameter filter_shape.
  • input_shape (None, tuple/list of len 5 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • filter_shape (None, tuple/list of len 5 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of three int) –

    Either of the following:

    'valid': apply filter wherever it completely overlaps with the
    input. Generates output of shape: input shape - filter shape + 1
    'full': apply filter wherever it partly overlaps with the input.
    Generates output of shape: input shape + filter shape - 1
    'half': pad input with a symmetric border of filter // 2,
    then perform a valid convolution. For filters with an odd number of slices, rows and columns, this leads to the output shape being equal to the input shape.
    int: pad input with a symmetric border of zeros of the given
    width, then perform a valid convolution.
    (int1, int2, int3)
    pad input with a symmetric border of int1, int2 and int3, then perform a valid convolution.
  • subsample (tuple of len 3) – Factor by which to subsample the output. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filter x, y and z dimensions before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 3) – Factor by which to subsample (stride) the input. Also called dilation elsewhere.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
Returns:

Set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output depth, output rows, output columns)

Return type:

Symbolic 5D tensor
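
A hedged usage sketch for conv3d with border_mode='half' (the shapes are illustrative, not from the reference):

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet import conv3d

    video = at.tensor5("video")  # (batch, channels, depth, rows, cols)
    kern = at.tensor5("kern")    # (out channels, in channels, depth, rows, cols)
    out = conv3d(video, kern, border_mode="half")

    f = aesara.function([video, kern], out.shape)
    v = np.zeros((2, 3, 8, 32, 32), dtype=aesara.config.floatX)
    k = np.zeros((4, 3, 3, 3, 3), dtype=aesara.config.floatX)
    print(f(v, k))  # expected: [2 4 8 32 32] (odd filter sizes preserve the shape)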

aesara.tensor.nnet.conv3d2d.conv3d(signals, filters, signals_shape=None, filters_shape=None, border_mode='valid')[source]

Convolve spatio-temporal filters with a movie.

It flips the filters.

Parameters:
  • signals – Timeseries of images whose pixels have color channels. Shape: [Ns, Ts, C, Hs, Ws].
  • filters – Spatio-temporal filters. Shape: [Nf, Tf, C, Hf, Wf].
  • signals_shape – None or a tuple/list with the shape of signals.
  • filters_shape – None or a tuple/list with the shape of filters.
  • border_mode – One of ‘valid’, ‘full’ or ‘half’.

Notes

Another way to define signals: (batch, time, in channel, row, column).
Another way to define filters: (out channel, time, in channel, row, column).

See also

A community script shows how to swap the axes between the two 3D convolution implementations in Aesara; a minimal sketch of the axis swap is given below.
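
A minimal sketch of that axis swap (the editor's assumption, based on the layouts documented above):

    import aesara.tensor as at
    from aesara.tensor.nnet.conv3d2d import conv3d as conv3d2d_conv3d

    # Tensors laid out for nnet.conv3d:
    signals_bcthw = at.tensor5("signals")  # (batch, channels, time, rows, cols)
    filters_oithw = at.tensor5("filters")  # (out ch, in ch, time, rows, cols)

    # Swap the channel and time axes to obtain the (Ns, Ts, C, Hs, Ws) and
    # (Nf, Tf, C, Hf, Wf) layouts expected by conv3d2d.conv3d.
    signals_btchw = signals_bcthw.dimshuffle(0, 2, 1, 3, 4)
    filters_otchw = filters_oithw.dimshuffle(0, 2, 1, 3, 4)

    out = conv3d2d_conv3d(signals_btchw, filters_otchw, border_mode="valid")
    # The result is in the (batch, time, channels, rows, cols) layout; swap back
    # if the rest of the graph expects the nnet.conv3d layout.
    out_bcthw = out.dimshuffle(0, 2, 1, 3, 4)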

aesara.tensor.nnet.conv.conv2d(input, filters, image_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), **kargs)[source]

Deprecated, old conv2d interface. This function will build the symbolic graph for convolving a stack of input images with a set of filters. The implementation is modelled after Convolutional Neural Networks (CNN). It is simply a wrapper to the ConvOp but provides a much cleaner interface.

Parameters:
  • input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, stack size, nb row, nb col). See the optional parameter image_shape.
  • filters (symbolic 4D tensor) – Set of filters used in CNN layer of shape (nb filters, stack size, nb row, nb col). See the optional parameter filter_shape.
  • border_mode ({'valid', 'full'}) – 'valid': only apply the filter to complete patches of the image. Generates output of shape: image_shape - filter_shape + 1. 'full': zero-pads the image to a multiple of the filter shape to generate output of shape: image_shape + filter_shape - 1.
  • subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere.
  • image_shape (None, tuple/list of len 4 of int, None or Constant variable) – The shape of the input parameter. Optional, used for optimizations like loop unrolling. You can put None for any element of the list to indicate that this element is not constant.
  • filter_shape (None, tuple/list of len 4 of int, None or Constant variable) – Optional, used for optimizations like loop unrolling. You can put None for any element of the list to indicate that this element is not constant.
  • kwargs – Passed on to ConvOp. Can be used to set the following: unroll_batch, unroll_kern, unroll_patch, openmp (see the ConvOp documentation).

    openmp: by default has the same value as config.openmp. For small image, filter, batch, kernel and stack sizes, it can be faster to disable OpenMP manually. A quick and incomplete test showed that with image size 6x6, filter size 4x4, batch size 1, one kernel and stack size 1, it is faster to disable OpenMP in valid mode; but with a batch size of 10, it is faster with OpenMP on a Core 2 Duo.
Returns:

Set of feature maps generated by convolutional layer. Tensor is of shape (batch size, nb filters, output row, output col).

Return type:

symbolic 4D tensor

Abstract conv interface

class aesara.tensor.nnet.abstract_conv.AbstractConv(convdim, imshp=None, kshp=None, border_mode='valid', subsample=None, filter_flip=True, filter_dilation=None, num_groups=1, unshared=False)[source]

Abstract Op for the forward convolution. Refer to BaseAbstractConv for a more detailed documentation.

R_op(inputs, eval_points)[source]

Construct a graph for the R-operator.

This method is primarily used by Rop.

Suppose the Op outputs [ f_1(inputs), ..., f_n(inputs) ].

Parameters:
  • inputs – The Op inputs.
  • eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R-operator is to be evaluated.
Return type:

rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).

make_node(img, kern)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns: node – The constructed Apply node.
Return type: Apply
perform(node, inp, out_)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.
  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class aesara.tensor.nnet.abstract_conv.AbstractConv2d(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

Abstract Op for the forward convolution. Refer to BaseAbstractConv for a more detailed documentation.

grad(inp, grads)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

class aesara.tensor.nnet.abstract_conv.AbstractConv2d_gradInputs(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

Gradient wrt. inputs for AbstractConv2d. Refer to BaseAbstractConv for a more detailed documentation.

Note: You will not want to use this directly, but rely on Aesara’s automatic differentiation or graph optimization to use it as needed.
grad(inp, grads)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

class aesara.tensor.nnet.abstract_conv.AbstractConv2d_gradWeights(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

Gradient wrt. filters for AbstractConv2d. Refer to BaseAbstractConv for a more detailed documentation.

Note: You will not want to use this directly, but rely on Aesara’s automatic differentiation or graph optimization to use it as needed.
grad(inp, grads)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

class aesara.tensor.nnet.abstract_conv.AbstractConv3d(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1), num_groups=1)[source]

Abstract Op for the forward convolution. Refer to BaseAbstractConv for a more detailed documentation.

grad(inp, grads)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

class aesara.tensor.nnet.abstract_conv.AbstractConv3d_gradInputs(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1), num_groups=1)[source]

Gradient wrt. inputs for AbstractConv3d. Refer to BaseAbstractConv for a more detailed documentation.

Note: You will not want to use this directly, but rely on Aesara’s automatic differentiation or graph optimization to use it as needed.
grad(inp, grads)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

class aesara.tensor.nnet.abstract_conv.AbstractConv3d_gradWeights(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1), num_groups=1)[source]

Gradient wrt. filters for AbstractConv3d. Refer to BaseAbstractConv for a more detailed documentation.

Note: You will not want to use this directly, but rely on Aesara’s automatic differentiation or graph optimization to use it as needed.
grad(inp, grads)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

class aesara.tensor.nnet.abstract_conv.AbstractConv_gradInputs(convdim, imshp=None, kshp=None, border_mode='valid', subsample=None, filter_flip=True, filter_dilation=None, num_groups=1, unshared=False)[source]

Gradient wrt. inputs for AbstractConv. Refer to BaseAbstractConv for a more detailed documentation.

Note: You will not want to use this directly, but rely on Aesara’s automatic differentiation or graph optimization to use it as needed.
make_node(kern, topgrad, shape, add_assert_shape=True)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns: node – The constructed Apply node.
Return type: Apply
perform(node, inp, out_)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.
  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class aesara.tensor.nnet.abstract_conv.AbstractConv_gradWeights(convdim, imshp=None, kshp=None, border_mode='valid', subsample=None, filter_flip=True, filter_dilation=None, num_groups=1, unshared=False)[source]

Gradient wrt. filters for AbstractConv. Refer to BaseAbstractConv for a more detailed documentation.

Note: You will not want to use this directly, but rely on Aesara’s automatic differentiation or graph optimization to use it as needed.
make_node(img, topgrad, shape, add_assert_shape=True)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns: node – The constructed Apply node.
Return type: Apply
perform(node, inp, out_)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.
  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class aesara.tensor.nnet.abstract_conv.BaseAbstractConv(convdim, imshp=None, kshp=None, border_mode='valid', subsample=None, filter_flip=True, filter_dilation=None, num_groups=1, unshared=False)[source]

Base class for AbstractConv

Parameters:
  • convdim – The number of convolution dimensions (2 or 3).
  • imshp (None, tuple/list of len (2 + convdim) of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. imshp is defined w.r.t the forward conv.
  • kshp (None, tuple/list of len (2 + convdim), or (2 + 2 * convdim) for unshared, of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. kshp is defined w.r.t the forward conv.
border_mode: str, int or a tuple of two ints or pairs of ints

Either of the following:

'valid': apply filter wherever it completely overlaps with the
input. Generates output of shape: input shape - filter shape + 1
'full': apply filter wherever it partly overlaps with the input.
Generates output of shape: input shape + filter shape - 1
'half': pad input with a symmetric border of filter size // 2
in each convolution dimension, then perform a valid convolution. For filters with an odd filter size, this leads to the output shape being equal to the input shape.
int: pad input with a symmetric border of zeros of the given
width, then perform a valid convolution.
(int1, int2): (for 2D) pad input with a symmetric border of int1,
int2, then perform a valid convolution.
(int1, (int2, int3)) or ((int1, int2), int3): (for 2D)
pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2).
((int1, int2), (int3, int4)): (for 2D) pad input with an asymmetric
border of (int1, int2) along one dimension and (int3, int4) along the second dimension.
(int1, int2, int3): (for 3D) pad input with a symmetric border of
int1, int2 and int3, then perform a valid convolution.
subsample: tuple of len convdim
Factor by which to subsample the output. Also called strides elsewhere.
filter_flip: bool
If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
filter_dilation: tuple of len convdim
Factor by which to subsample (stride) the input. Also called dilation factor.
num_groups: int
Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
unshared: bool
If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input.
conv(img, kern, mode='valid', dilation=1, num_groups=1, unshared=False, direction='forward')[source]

Basic slow Python 2D or 3D convolution for DebugMode

do_constant_folding(fgraph, node)[source]

Determine whether or not constant folding should be performed for the given node.

This allows each Op to determine if it wants to be constant folded when all its inputs are constant. This allows it to choose where it puts its memory/speed trade-off. Also, it could make things faster as constants can’t be used for in-place operations (see *IncSubtensor).

Parameters: node (Apply) – The node for which the constant folding determination is made.
Returns: res
Return type: bool
flops(inp, outp)[source]

Useful with the hack in profiling to print the MFlops

unshared2d(inp, kern, out_shape, direction='forward')[source]

Basic slow Python unshared 2d convolution.

aesara.tensor.nnet.abstract_conv.abstract_conv2d(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

This function will build the symbolic graph for convolving a mini-batch of a stack of 2D inputs with a set of 2D filters. The implementation is modelled after Convolutional Neural Networks (CNN).

Refer to nnet.conv2d for a more detailed documentation.

aesara.tensor.nnet.abstract_conv.assert_conv_shape(shape)[source]

This function adds Assert nodes that check if shape is a valid convolution shape.

The first two dimensions should be larger than or equal to zero. The convolution dimensions should be larger than zero.

Parameters: shape (tuple of int (symbolic or numeric)) – corresponds to the input, output or kernel shape of a convolution. For input and output, the first elements should be the batch size and number of channels. For kernels, the first and second elements should contain the number of input and output channels. The remaining dimensions are the convolution dimensions.
Returns:

Returns a tuple similar to the given shape. For constant elements in shape, the function checks the value and raises a ValueError if the dimension is invalid. The elements that are not constant are wrapped in an Assert op that checks the dimension at run time.
aesara.tensor.nnet.abstract_conv.assert_shape(x, expected_shape, msg='Unexpected shape.')[source]

Wraps x in an Assert to check its shape.

Parameters:
  • x (TensorVariable) – x will be wrapped in an Assert.
  • expected_shape (tuple or list) – The expected shape of x. The size of a dimension can be None, which means it will not be checked.
  • msg (str) – The error message of the Assert.
Returns:

x wrapped in an Assert. At execution time, this will throw an AssertionError if the shape of x does not match expected_shape. If expected_shape is None or contains only Nones, the function will return x directly.

Return type:

Tensor

aesara.tensor.nnet.abstract_conv.bilinear_kernel_1D(ratio, normalize=True)[source]

Compute 1D kernel for bilinear upsampling

This function builds the 1D kernel that can be used to upsample a tensor by the given ratio using bilinear interpolation.

Parameters:
  • ratio (int or Constant/ScalarType Aesara tensor of int* dtype) – the ratio by which an image will be upsampled by the returned filter in the 2D space.
  • normalize (bool) – Indicates whether to normalize the kernel or not. Default is True.
Returns:

the 1D kernels that can be applied to any given image to upsample it by the indicated ratio using bilinear interpolation in one dimension.

Return type:

symbolic 1D tensor

aesara.tensor.nnet.abstract_conv.bilinear_kernel_2D(ratio, normalize=True)[source]

Compute 2D kernel for bilinear upsampling

This function builds the 2D kernel that can be used to upsample a tensor by the given ratio using bilinear interpolation.

Parameters:
  • ratio (int or Constant/ScalarType Aesara tensor of int* dtype) – the ratio by which an image will be upsampled by the returned filter in the 2D space.
  • normalize (bool) – Indicates whether to normalize the kernel or not. Default is True.
Returns:

the 2D kernels that can be applied to any given image to upsample it by the indicated ratio using bilinear interpolation in two dimensions.

Return type:

symbolic 2D tensor

aesara.tensor.nnet.abstract_conv.bilinear_upsampling(input, ratio=None, frac_ratio=None, batch_size=None, num_input_channels=None, use_1D_kernel=True)[source]

Compute bilinear upsampling. This function will build the symbolic graph for upsampling a tensor by the given ratio using bilinear interpolation.

Parameters:
  • input (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns) that will be upsampled.
  • ratio (int or Constant or ScalarType Tensor of int* dtype) – the ratio by which the input is upsampled in the 2D space (row and col size).
  • frac_ratio (None, tuple of int or tuple of tuples of int) – The tuple defining the fractional ratio by which the input is upsampled in the 2D space. One fractional ratio should be represented as (numerator, denominator). If row and col ratios are different, frac_ratio should be a tuple of fractional ratios, i.e. a tuple of tuples.
  • use_1D_kernel (bool) – if set to true, row and column will be upsampled separately by 1D kernels, otherwise they are upsampled together using a 2D kernel. The final result is the same, only the speed can differ, given factors such as upsampling ratio.
Returns:

set of feature maps generated by bilinear upsampling. Tensor is of shape (batch size, num_input_channels, input row size * row ratio, input column size * column ratio). Each of these ratios can be fractional.

Return type:

symbolic 4D tensor

Notes

Note: The kernel used for bilinear interpolation is fixed (not learned).
Note: When the upsampling ratio is even, the last row and column are repeated one extra time compared to the first row and column, which makes the upsampled tensor asymmetric on both sides. This does not happen when the upsampling ratio is odd.
Note: This function must receive either ratio or frac_ratio as a parameter, never both at once.
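
A hedged usage sketch (shape and ratio are the editor's example): upsampling a 4x4 map by a ratio of 2.

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import bilinear_upsampling

    x = at.tensor4("x")
    up = bilinear_upsampling(x, ratio=2, batch_size=1, num_input_channels=1)

    f = aesara.function([x], up.shape)
    x_val = np.arange(16, dtype=aesara.config.floatX).reshape((1, 1, 4, 4))
    print(f(x_val))  # expected: [1 1 8 8]
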
aesara.tensor.nnet.abstract_conv.border_mode_to_pad(mode, convdim, kshp)[source]

Computes a tuple for padding given the border_mode parameter

Parameters:
  • mode (str, int or tuple) – One of “valid”, “full”, “half”, an integer, or a tuple where each member is either an integer or a tuple of 2 positive integers.
  • convdim (int) – The dimensionality of the convolution.
  • kshp (list/tuple of length convdim) – Indicates the size of the kernel in the spatial dimensions.
Returns:

A tuple containing convdim elements, each of which is a tuple of two positive integers corresponding to the padding on the left and the right sides respectively.
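
A hedged worked example (the expected outputs are the editor's reading of the rules above, for a 2D convolution with a 3x3 kernel):

    from aesara.tensor.nnet.abstract_conv import border_mode_to_pad

    # Expected values follow the description above and are assumptions,
    # not output copied from the library.
    print(border_mode_to_pad("valid", 2, (3, 3)))      # expected: ((0, 0), (0, 0))
    print(border_mode_to_pad("half", 2, (3, 3)))       # expected: ((1, 1), (1, 1))
    print(border_mode_to_pad(2, 2, (3, 3)))            # expected: ((2, 2), (2, 2))
    print(border_mode_to_pad((2, (1, 0)), 2, (3, 3)))  # expected: ((2, 2), (1, 0))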

aesara.tensor.nnet.abstract_conv.causal_conv1d(input, filters, filter_shape, input_shape=None, subsample=1, filter_flip=True, filter_dilation=1, num_groups=1, unshared=False)[source]

Computes (dilated) causal convolution

The output at time t depends only on the inputs up to time t-1. Used for modelling temporal data. See [WaveNet: A Generative Model for Raw Audio, section 2.1] (https://arxiv.org/abs/1609.03499).

Parameters:
  • input (symbolic 3D tensor) – mini-batch of feature vector stacks, of shape (batch_size, input_channels, input_length). See the optional parameter input_shape.
  • filters (symbolic 3D tensor) – Set of filters used in the CNN, of shape (output_channels, input_channels, filter_length)
  • filter_shape ([None/int/Constant] * 2 + [Tensor/int/Constant]) – The shape of the filters parameter. A tuple/list of len 3, with the first two dimensions being None or int or Constant and the last dimension being Tensor or int or Constant. Not optional, since the filter length is needed to calculate the left padding for causality.
  • input_shape (None or [None/int/Constant] * 3) – The shape of the input parameter. None, or a tuple/list of len 3. Optional, possibly used to choose an optimal implementation.
  • subsample (int) – The factor by which to subsample the output. Also called strides elsewhere.
  • filter_dilation (int) – Factor by which to subsample (stride) the input. Also called dilation factor.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
  • unshared (bool) – If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input.
Returns:

Set of feature vectors generated by convolutional layer. Tensor is of shape (batch_size, output_channels, output_length)

Return type:

Symbolic 3D tensor.

Notes

Note: Currently, this is implemented with the 2D convolution ops.
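
A hedged usage sketch (shapes chosen by the editor); with subsample=1 the output length should match the input length:

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import causal_conv1d

    x = at.tensor3("x")  # (batch, input channels, length)
    w = at.tensor3("w")  # (output channels, input channels, filter length)
    y = causal_conv1d(x, w, filter_shape=(8, 4, 5))

    f = aesara.function([x, w], y.shape)
    x_val = np.zeros((1, 4, 20), dtype=aesara.config.floatX)
    w_val = np.zeros((8, 4, 5), dtype=aesara.config.floatX)
    print(f(x_val, w_val))  # expected: [1 8 20] (causal padding keeps the length)
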
aesara.tensor.nnet.abstract_conv.check_conv_gradinputs_shape(image_shape, kernel_shape, output_shape, border_mode, subsample, filter_dilation=None)[source]

This function checks if the given image shapes are consistent.

Parameters:
  • image_shape (tuple of int (symbolic or numeric)) – corresponds to the input image shape. Its four (or five) elements must correspond respectively to: batch size, number of input channels, height and width (and possibly depth) of the image. None where undefined.
  • kernel_shape (tuple of int (symbolic or numeric)) – corresponds to the kernel shape. Its four (or five) elements must correspond respectively to: number of output channels, number of input channels, height and width (and possibly depth) of the kernel. None where undefined.
  • output_shape (tuple of int (symbolic or numeric)) – corresponds to the output shape. Its four (or five) elements must correspond respectively to: batch size, number of output channels, height and width (and possibly depth) of the output. None where undefined.
  • border_mode (str, int (symbolic or numeric), tuple of int (symbolic or numeric), or pairs of ints) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is a tuple, its two (or three) elements respectively correspond to the padding on the height and width (and possibly depth) axis. For asymmetric padding, provide a pair of ints for each dimension.
  • subsample (tuple of int (symbolic or numeric)) – Its two or three elements respectively correspond to the subsampling on the height and width (and possibly depth) axis.
  • filter_dilation (tuple of int (symbolic or numeric)) – Its two or three elements correspond respectively to the dilation on the height and width axis.
Returns:

Returns False if a convolution with the given input shape, kernel shape and parameters would not have produced the given output shape. Returns True in all other cases: if the given output shape matches the computed output shape, but also if the shape could not be checked because it contains symbolic values.

aesara.tensor.nnet.abstract_conv.conv2d(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, image_shape=None, filter_dilation=(1, 1), num_groups=1, unshared=False, **kwargs)[source]

This function will build the symbolic graph for convolving a mini-batch of a stack of 2D inputs with a set of 2D filters. The implementation is modelled after Convolutional Neural Networks (CNN).

Parameters:
  • input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape.
  • filters (symbolic 4D or 6D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter rows, filter columns) for normal convolution and (output channels, output rows, output columns, input channels, filter rows, filter columns) for unshared convolution. See the optional parameter filter_shape.
  • input_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • filter_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or a tuple of two ints or pairs of ints) –

    Either of the following:

    'valid': apply filter wherever it completely overlaps with the
    input. Generates output of shape: input shape - filter shape + 1
    'full': apply filter wherever it partly overlaps with the input.
    Generates output of shape: input shape + filter shape - 1
    'half': pad input with a symmetric border of filter rows // 2
    rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape.
    int: pad input with a symmetric border of zeros of the given
    width, then perform a valid convolution.
    (int1, int2): (for 2D) pad input with a symmetric border of int1,
    int2, then perform a valid convolution.
    (int1, (int2, int3)) or ((int1, int2), int3): (for 2D)
    pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2).
  • subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • image_shape (None, tuple/list of len 4 of int or Constant variable) – Deprecated alias for input_shape.
  • filter_dilation (tuple of len 2) – Factor by which to subsample (stride) the input. Also called dilation elsewhere.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
  • unshared (bool) – If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input.
  • kwargs – Any other keyword arguments are accepted for backwards compatibility, but will be ignored.
Returns:

Set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns)

Return type:

Symbolic 4D tensor

aesara.tensor.nnet.abstract_conv.conv2d_grad_wrt_inputs(output_grad, filters, input_shape, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

Compute conv output gradient w.r.t its inputs

This function builds the symbolic graph for getting the gradient of the output of a convolution (namely output_grad) w.r.t the input of the convolution, given a set of 2D filters used by the convolution, such that the output_grad is upsampled to the input_shape.

Parameters:
  • output_grad (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). This is the tensor that will be upsampled or the output gradient of the convolution whose gradient will be taken with respect to the input of the convolution.
  • filters (symbolic 4D or 6D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter rows, filter columns) for normal convolution and (output channels, output rows, output columns, input channels, filter rows, filter columns) for unshared convolution. See the optional parameter filter_shape.
  • input_shape ([None/int/Constant] * 2 + [Tensor/int/Constant] * 2) – The shape of the input (upsampled) parameter. A tuple/list of len 4, with the first two dimensions being None or int or Constant and the last two dimensions being Tensor or int or Constant. Not Optional, since given the output_grad shape and the subsample values, multiple input_shape may be plausible.
  • filter_shape (None or [None/int/Constant] * (4 or 6)) – The shape of the filters parameter. None or a tuple/list of len 4 or a tuple/list of len 6 (for unshared convolution) Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or a tuple of two ints or pairs of ints) –

    Either of the following:

    'valid'
    apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1
    'full'
    apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1
    'half'
    pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape. It is known as ‘same’ elsewhere.
    int
    pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
    (int1, int2)
    pad input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
    (int1, (int2, int3)) or ((int1, int2), int3)
    pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2).
    ((int1, int2), (int3, int4))
    pad input with an asymmetric border of (int1, int2) along one dimension and (int3, int4) along the second dimension.
  • subsample (tuple of len 2) – The subsampling used in the forward pass. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 2) – The filter dilation used in the forward pass. Also known as input striding.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
  • unshared (bool) – If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input.
Returns:

set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns)

Return type:

symbolic 4D tensor
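
A hedged sketch (the editor's example): for a forward valid convolution with 3x3 filters and stride 2 mapping a (1, 8, 9, 9) input to a (1, 16, 4, 4) output, the gradient w.r.t. the inputs recovers the (1, 8, 9, 9) shape.

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import conv2d_grad_wrt_inputs

    g = at.tensor4("g")  # gradient of the forward-convolution output
    w = at.tensor4("w")  # forward filters: (out channels, in channels, rows, cols)
    dx = conv2d_grad_wrt_inputs(
        g, w, input_shape=(1, 8, 9, 9), border_mode="valid", subsample=(2, 2)
    )

    f = aesara.function([g, w], dx.shape)
    g_val = np.zeros((1, 16, 4, 4), dtype=aesara.config.floatX)
    w_val = np.zeros((16, 8, 3, 3), dtype=aesara.config.floatX)
    print(f(g_val, w_val))  # expected: [1 8 9 9]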

aesara.tensor.nnet.abstract_conv.conv2d_grad_wrt_weights(input, output_grad, filter_shape, input_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

Compute conv output gradient w.r.t its weights

This function will build the symbolic graph for getting the gradient of the output of a convolution (output_grad) w.r.t its weights.

Parameters:
  • input (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). This is the input of the convolution in the forward pass.
  • output_grad (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). This is the gradient of the output of convolution.
  • filter_shape ([None/int/Constant] * (2 or 4) + [Tensor/int/Constant] * 2) – The shape of the filter parameter. A tuple/list of len 4 or 6 (for unshared), with the first two dimensions being None or int or Constant and the last two dimensions being Tensor or int or Constant. Not Optional, since given the output_grad shape and the input_shape, multiple filter_shape may be plausible.
  • input_shape (None or [None/int/Constant] * 4) – The shape of the input parameter. None or a tuple/list of len 4. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or a tuple of two ints or pairs of ints) –

    Either of the following:

    'valid'
    apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1
    'full'
    apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1
    'half'
    pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape. It is known as ‘same’ elsewhere.
    int
    pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
    (int1, int2)
    pad input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
    (int1, (int2, int3)) or ((int1, int2), int3)
    pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2).
    ((int1, int2), (int3, int4))
    pad input with an asymmetric border of (int1, int2) along one dimension and (int3, int4) along the second dimension.
  • subsample (tuple of len 2) – The subsampling used in the forward pass of the convolutional operation. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 2) – The filter dilation used in the forward pass. Also known as input striding.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
  • unshared (bool) – If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input.
Returns:

set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns) for normal convolution and (output channels, output rows, output columns, input channels, filter rows, filter columns) for unshared convolution

Return type:

symbolic 4D tensor or 6D tensor
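
A hedged sketch (the editor's example): for a forward valid convolution with 5x5 filters mapping a (1, 3, 32, 32) input to a (1, 8, 28, 28) output, the filter gradient has the (8, 3, 5, 5) filter shape.

    import numpy as np

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import conv2d_grad_wrt_weights

    x = at.tensor4("x")  # forward-convolution input
    g = at.tensor4("g")  # gradient of the forward-convolution output
    dw = conv2d_grad_wrt_weights(x, g, filter_shape=(8, 3, 5, 5), border_mode="valid")

    f = aesara.function([x, g], dw.shape)
    x_val = np.zeros((1, 3, 32, 32), dtype=aesara.config.floatX)
    g_val = np.zeros((1, 8, 28, 28), dtype=aesara.config.floatX)
    print(f(x_val, g_val))  # expected: [8 3 5 5]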

aesara.tensor.nnet.abstract_conv.conv2d_transpose(input, filters, output_shape, filter_shape=None, border_mode='valid', input_dilation=(1, 1), filter_flip=True, filter_dilation=(1, 1), num_groups=1, unshared=False)[source]

This function will build the symbolic graph for applying a transposed convolution over a mini-batch of a stack of 2D inputs with a set of 2D filters.

Parameters:
  • input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape.
  • filters (symbolic 4D tensor) – Set of filters used in CNN layer of shape (input channels, output channels, filter rows, filter columns). See the optional parameter filter_shape. Note: the order of output_channels and input_channels is reversed with respect to conv2d.
  • output_shape (tuple/list of len 4 of int or Constant variable) – The shape of the output of conv2d_transpose. The last two elements are allowed to be aesara.tensor.type.scalar variables.
  • filter_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of two int) – Refers to the border_mode argument of the corresponding forward (non-transposed) convolution. See the argument description in conv2d. What was padding for the forward convolution means cropping the output of the transposed one. valid corresponds to no cropping, full to maximal cropping.
  • input_dilation (tuple of len 2) – Corresponds to subsample (also called strides elsewhere) in the non-transposed convolution.
  • filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 2) – Factor by which to subsample (stride) the input. Also called dilation elsewhere.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out convolutions separately.
  • unshared (bool) – If true, then unshared or ‘locally connected’ convolution will be performed. A different filter will be used for each region of the input. Grouped unshared convolution is supported.
Returns:

Set of feature maps generated by the transposed convolution. Tensor is of shape (batch size, output channels, output rows, output columns)

Return type:

Symbolic 4D tensor

aesara.tensor.nnet.abstract_conv.conv3d(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1), num_groups=1)[source]

This function will build the symbolic graph for convolving a mini-batch of a stack of 3D inputs with a set of 3D filters. The implementation is modelled after Convolutional Neural Networks (CNN).

Parameters:
  • input (symbolic 5D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input depth, input rows, input columns). See the optional parameter input_shape.
  • filters (symbolic 5D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter depth, filter rows, filter columns). See the optional parameter filter_shape.
  • input_shape (None, tuple/list of len 5 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • filter_shape (None, tuple/list of len 5 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of three int) –

    Either of the following:

    'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
    'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
    'half': pad input with a symmetric border of filter // 2, then perform a valid convolution. For filters with an odd number of slices, rows and columns, this leads to the output shape being equal to the input shape.
    int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
    (int1, int2, int3): pad input with a symmetric border of int1 slices, int2 rows and int3 columns, then perform a valid convolution.
  • subsample (tuple of len 3) – Factor by which to subsample the output. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filter x, y and z dimensions before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 3) – Factor by which to subsample (stride) the input. Also called dilation elsewhere.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out its convolution separately.
Returns:

Set of feature maps generated by the convolutional layer. Tensor is of shape (batch size, output channels, output depth, output rows, output columns)

Return type:

Symbolic 5D tensor
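
A minimal usage sketch; the variable names are illustrative, and Aesara selects the concrete implementation when the graph is compiled:

    import aesara
    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import conv3d

    volumes = at.tensor5("volumes")   # (batch size, input channels, depth, rows, columns)
    kernels = at.tensor5("kernels")   # (output channels, input channels, depth, rows, columns)
    out = conv3d(volumes, kernels, border_mode="half", subsample=(1, 1, 1))
    f = aesara.function([volumes, kernels], out)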

aesara.tensor.nnet.abstract_conv.conv3d_grad_wrt_inputs(output_grad, filters, input_shape, filter_shape=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1), num_groups=1)[source]

Compute the gradient of a convolution output w.r.t. its inputs.

This function builds the symbolic graph for getting the gradient of the output of a convolution (namely output_grad) w.r.t. the input of the convolution, given a set of 3D filters used by the convolution, such that the output_grad is upsampled to the input_shape.

Parameters:
  • output_grad (symbolic 5D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input depth, input rows, input columns). This is the tensor that will be upsampled or the output gradient of the convolution whose gradient will be taken with respect to the input of the convolution.
  • filters (symbolic 5D tensor) – set of filters used in CNN layer of shape (output channels, input channels, filter depth, filter rows, filter columns). See the optional parameter filter_shape.
  • input_shape ([None/int/Constant] * 2 + [Tensor/int/Constant] * 3) – The shape of the input (upsampled) parameter. A tuple/list of len 5, with the first two dimensions being None, int or Constant and the last three dimensions being Tensor, int or Constant. Not optional, since given the output_grad shape and the subsample values, multiple input_shape values may be plausible.
  • filter_shape (None or [None/int/Constant] * 5) – The shape of the filters parameter. None or a tuple/list of len 5. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of three int) –

    Either of the following:

    'valid'
    apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1
    'full'
    apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1
    'half'
    pad input with a symmetric border of filter // 2, then perform a valid convolution. For filters with an odd number of slices, rows and columns, this leads to the output shape being equal to the input shape. It is known as ‘same’ elsewhere.
    int
    pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
    (int1, int2, int3)
    pad input with a symmetric border of int1 slices, int2 rows and int3 columns, then perform a valid convolution.
  • subsample (tuple of len 3) – The subsampling used in the forward pass. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filter x, y and z dimensions before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 3) – The filter dilation used in the forward pass. Also known as input striding.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out its convolution separately.
Returns:

The gradient of the convolution output w.r.t. its input. Tensor is of shape (batch size, input channels, input depth, input rows, input columns)

Return type:

symbolic 5D tensor
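
A hedged sketch of building the gradient w.r.t. the input volume; the concrete shape values are illustrative assumptions:

    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import conv3d_grad_wrt_inputs

    output_grad = at.tensor5("output_grad")
    filters = at.tensor5("filters")
    input_grad = conv3d_grad_wrt_inputs(
        output_grad,
        filters,
        input_shape=(None, None, 16, 64, 64),   # depth, rows and columns must be given
        filter_shape=(8, 3, 3, 5, 5),
        border_mode="valid",
        subsample=(1, 1, 1),
    )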

aesara.tensor.nnet.abstract_conv.conv3d_grad_wrt_weights(input, output_grad, filter_shape, input_shape=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1), num_groups=1)[source]

Compute the gradient of a convolution output w.r.t. its weights.

This function builds the symbolic graph for getting the gradient of the output of a convolution (output_grad) w.r.t. its weights.

Parameters:
  • input (symbolic 5D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input depth, input rows, input columns). This is the input of the convolution in the forward pass.
  • output_grad (symbolic 5D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input depth, input rows, input columns). This is the gradient of the output of convolution.
  • filter_shape ([None/int/Constant] * 2 + [Tensor/int/Constant] * 3) – The shape of the filters parameter. A tuple/list of len 5, with the first two dimensions being None, int or Constant and the last three dimensions being Tensor, int or Constant. Not optional, since given the output_grad shape and the input_shape, multiple filter_shape values may be plausible.
  • input_shape (None or [None/int/Constant] * 5) – The shape of the input parameter. None or a tuple/list of len 5. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of three int) –

    Either of the following:

    'valid'
    apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1
    'full'
    apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1
    'half'
    pad input with a symmetric border of filter slices // 2 slices, filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of slices, rows and columns, this leads to the output shape being equal to the input shape. It is known as ‘same’ elsewhere.
    int
    pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
    (int1, int2, int3)
    pad input with a symmetric border of int1 slices, int2 rows and int3 columns, then perform a valid convolution.
  • subsample (tuple of len 3) – The subsampling used in the forward pass of the convolutional operation. Also called strides elsewhere.
  • filter_flip (bool) – If True, will flip the filters before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 3) – The filter dilation used in the forward pass. Also known as input striding.
  • num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out its convolution separately.
Returns:

The gradient of the convolution output w.r.t. its weights. Tensor is of shape (output channels, input channels, filter depth, filter rows, filter columns)

Return type:

symbolic 5D tensor
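
A hedged sketch; the filter_shape value is an illustrative assumption:

    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import conv3d_grad_wrt_weights

    volumes = at.tensor5("volumes")           # forward-pass input
    output_grad = at.tensor5("output_grad")   # gradient of the cost w.r.t. the convolution output
    weights_grad = conv3d_grad_wrt_weights(
        volumes,
        output_grad,
        filter_shape=(8, 3, 3, 5, 5),   # (output channels, input channels, depth, rows, columns)
        border_mode="valid",
        subsample=(1, 1, 1),
    )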

aesara.tensor.nnet.abstract_conv.frac_bilinear_upsampling(input, frac_ratio)[source]

Compute bilinear upsampling. This function builds the symbolic graph for upsampling a tensor by the given ratio using bilinear interpolation.

Parameters:
  • input (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns) that will be upsampled.
  • frac_ratio (tuple of int or tuple of tuples of int) – The tuple defining the fractional ratio by which the input is upsampled in the 2D space. One fractional ratio should be represented as (numerator, denominator). If the row and column ratios differ, frac_ratio should be a tuple of fractional ratios, i.e. a tuple of tuples.
Returns:

set of feature maps generated by bilinear upsampling. Tensor is of shape (batch size, num_input_channels, input row size * row ratio, input column size * column ratio). Each of these ratios can be fractional.

Return type:

symbolic 4D tensor

Notes

Note: The kernel used for bilinear interpolation is fixed (not learned).
Note: When the upsampling frac_ratio numerator is even, the last row and column are repeated one extra time compared to the first row and column, which makes the upsampled tensor asymmetrical on both sides. This does not happen when the numerator is odd.
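
A minimal sketch upsampling rows by 3/2 and columns by 7/4 (illustrative ratios):

    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import frac_bilinear_upsampling

    images = at.tensor4("images")   # (batch size, channels, rows, columns)
    upsampled = frac_bilinear_upsampling(images, frac_ratio=((3, 2), (7, 4)))
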
aesara.tensor.nnet.abstract_conv.get_conv_gradinputs_shape(kernel_shape, top_shape, border_mode, subsample, filter_dilation=None, num_groups=1)[source]

This function tries to compute the image shape of convolution gradInputs.

The image shape can only be computed exactly when subsample is 1. If subsample for a dimension is not 1, this function will return None for that dimension.

Parameters:
  • kernel_shape (tuple of int, symbolic or numeric) – The kernel shape. Its four (or five) elements must correspond respectively to: number of output channels, number of input channels, height and width (and possibly depth) of the kernel. None where undefined.
  • top_shape (tuple of int, symbolic or numeric) – The top image shape. Its four (or five) elements must correspond respectively to: batch size, number of output channels, height and width (and possibly depth) of the image. None where undefined.
  • border_mode (str, int (symbolic or numeric), tuple of int, or pairs of int) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is a tuple, its two (or three) elements respectively correspond to the padding on the height and width (and possibly depth) axes. For asymmetric padding, provide a pair of ints for each dimension.
  • subsample (tuple of int, symbolic or numeric) – Its two or three elements respectively correspond to the subsampling on the height and width (and possibly depth) axes.
  • filter_dilation (tuple of int, symbolic or numeric) – Its two or three elements respectively correspond to the dilation on the height and width axes.
  • num_groups (int) – The number of separate groups into which the tensors are divided.
  • Note – The shape of the convolution output does not depend on the ‘unshared’ parameter.
Returns:

image_shape – Tuple of int corresponding to the input image shape. Its four elements must correspond respectively to: batch size, number of output channels, height and width of the image. None where undefined.

Return type:

tuple of int
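
A worked example with illustrative numbers (a 3x3 kernel, 'valid' mode, unit strides):

    from aesara.tensor.nnet.abstract_conv import get_conv_gradinputs_shape

    kernel_shape = (16, 3, 3, 3)   # (output channels, input channels, kernel rows, kernel columns)
    top_shape = (8, 16, 30, 30)    # (batch size, output channels, output rows, output columns)
    print(get_conv_gradinputs_shape(kernel_shape, top_shape, "valid", (1, 1)))
    # expected: (8, 3, 32, 32), the input shape that would have produced top_shape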

aesara.tensor.nnet.abstract_conv.get_conv_gradinputs_shape_1axis(kernel_shape, top_shape, border_mode, subsample, dilation)[source]

This function tries to compute the image shape of convolution gradInputs.

The image shape can only be computed exactly when subsample is 1. If subsample is not 1, this function will return None.

Parameters:
  • kernel_shape (int or None) – Corresponds to the kernel shape on the given axis. None if undefined.
  • top_shape (int or None) – Corresponds to the top shape on the given axis. None if undefined.
  • border_mode (str, int or tuple of 2 ints) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is an integer, it must correspond to the padding on the considered axis. If it is a tuple, its two elements must correspond to the asymmetric padding (e.g., left and right) on the considered axis.
  • subsample (int) – Corresponds to the subsampling on the considered axis.
  • dilation (int) – Corresponds to the dilation on the considered axis.
Returns:

image_shape – Int or None. Corresponds to the input image shape on the given axis. None if undefined.

Return type:

int or None
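
An illustrative single-axis example: with subsample 1 the input extent is recoverable, with subsample 2 it is ambiguous:

    from aesara.tensor.nnet.abstract_conv import get_conv_gradinputs_shape_1axis

    print(get_conv_gradinputs_shape_1axis(3, 30, "valid", 1, 1))   # expected: 32
    print(get_conv_gradinputs_shape_1axis(3, 15, "valid", 2, 1))   # expected: None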

aesara.tensor.nnet.abstract_conv.get_conv_gradweights_shape(image_shape, top_shape, border_mode, subsample, filter_dilation=None, num_groups=1, unshared=False)[source]

This function tries to compute the kernel shape of convolution gradWeights.

The weights shape can only be computed exactly when subsample is 1 and border_mode is not ‘half’. If subsample is not 1 or border_mode is ‘half’, this function will return None.

Parameters:
  • image_shape (tuple of int) – The input image shape. Its four (or five) elements must correspond respectively to: batch size, number of output channels, height and width of the image. None where undefined.
  • top_shape (tuple of int, symbolic or numeric) – The top image shape. Its four (or five) elements must correspond respectively to: batch size, number of output channels, height and width (and possibly depth) of the image. None where undefined.
  • border_mode (str, int (symbolic or numeric), tuple of int, or pairs of int) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is a tuple, its two (or three) elements respectively correspond to the padding on the height and width (and possibly depth) axes. For asymmetric padding, provide a pair of ints for each dimension.
  • subsample (tuple of int, symbolic or numeric) – Its two or three elements respectively correspond to the subsampling on the height and width (and possibly depth) axes.
  • filter_dilation (tuple of int, symbolic or numeric) – Its two or three elements respectively correspond to the dilation on the height and width axes.
  • num_groups (int) – The number of separate groups into which the tensors are divided.
  • unshared (bool) – If True, unshared convolution will be performed, where a different filter is applied to each area of the input.
Returns:

kernel_shape – Tuple of int (symbolic or numeric) corresponding to the kernel shape. Its four (or five) elements correspond respectively to: number of output channels, number of input channels, height and width (and possibly depth) of the kernel. None where undefined.

Return type:

tuple of int (symbolic or numeric)
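
A worked example with illustrative numbers; a 32x32 input producing a 30x30 output in 'valid' mode implies a 3x3 kernel:

    from aesara.tensor.nnet.abstract_conv import get_conv_gradweights_shape

    image_shape = (8, 3, 32, 32)   # (batch size, input channels, rows, columns)
    top_shape = (8, 16, 30, 30)    # (batch size, output channels, rows, columns)
    print(get_conv_gradweights_shape(image_shape, top_shape, "valid", (1, 1)))
    # expected: (16, 3, 3, 3)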

aesara.tensor.nnet.abstract_conv.get_conv_gradweights_shape_1axis(image_shape, top_shape, border_mode, subsample, dilation)[source]

This function tries to compute the kernel shape of convolution gradWeights.

The weights shape can only be computed exactly when subsample is 1 and border_mode is not ‘half’. If subsample is not 1 or border_mode is ‘half’, this function will return None.

Parameters:
  • image_shape (int or None) – Corresponds to the input image shape on the given axis. None if undefined.
  • top_shape (int or None) – Corresponds to the top shape on the given axis. None if undefined.
  • border_mode (str, int or tuple of 2 ints) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is an integer, it must correspond to the padding on the considered axis. If it is a tuple, its two elements must correspond to the asymmetric padding (e.g., left and right) on the considered axis.
  • subsample (int) – Corresponds to the subsampling on the considered axis.
  • dilation (int) – Corresponds to the dilation on the considered axis.
Returns:

kernel_shape – Int or None. Corresponds to the kernel shape on the given axis. None if undefined.

Return type:

int or None
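
An illustrative single-axis example; 'half' padding makes the kernel extent ambiguous:

    from aesara.tensor.nnet.abstract_conv import get_conv_gradweights_shape_1axis

    print(get_conv_gradweights_shape_1axis(32, 30, "valid", 1, 1))   # expected: 3
    print(get_conv_gradweights_shape_1axis(32, 32, "half", 1, 1))    # expected: None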

aesara.tensor.nnet.abstract_conv.get_conv_output_shape(image_shape, kernel_shape, border_mode, subsample, filter_dilation=None)[source]

This function computes the output shape of a convolution operation.

Parameters:
  • image_shape (tuple of int, symbolic or numeric) – The input image shape. Its four (or five) elements must correspond respectively to: batch size, number of input channels, height and width (and possibly depth) of the image. None where undefined.
  • kernel_shape (tuple of int, symbolic or numeric) – The kernel shape. For a normal convolution, its four (for 2D convolution) or five (for 3D convolution) elements must correspond respectively to: number of output channels, number of input channels, height and width (and possibly depth) of the kernel. For an unshared 2D convolution, its six elements must correspond to: number of output channels, height and width of the output, number of input channels, height and width of the kernel. None where undefined.
  • border_mode (str, int (symbolic or numeric), tuple of int, or pairs of int) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is a tuple, its two (or three) elements respectively correspond to the padding on the height and width (and possibly depth) axes. For asymmetric padding, provide a pair of ints for each dimension.
  • subsample (tuple of int, symbolic or numeric) – Its two or three elements respectively correspond to the subsampling on the height and width (and possibly depth) axes.
  • filter_dilation (tuple of int, symbolic or numeric) – Its two or three elements respectively correspond to the dilation on the height and width axes.
  • Note – The shape of the convolution output does not depend on the ‘unshared’ or the ‘num_groups’ parameters.
Returns:

output_shape – Tuple of int corresponding to the output image shape. Its four elements must correspond respectively to: batch size, number of output channels, height and width of the image. None where undefined.

Return type:

tuple of int
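
A worked example with illustrative shapes, once in 'valid' mode and once in strided 'half' mode:

    from aesara.tensor.nnet.abstract_conv import get_conv_output_shape

    image_shape = (8, 3, 32, 32)
    kernel_shape = (16, 3, 3, 3)
    print(get_conv_output_shape(image_shape, kernel_shape, "valid", (1, 1)))
    # expected: (8, 16, 30, 30)
    print(get_conv_output_shape(image_shape, kernel_shape, "half", (2, 2), (1, 1)))
    # expected: (8, 16, 16, 16)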

aesara.tensor.nnet.abstract_conv.get_conv_shape_1axis(image_shape, kernel_shape, border_mode, subsample, dilation=1)[source]

This function computes the output shape of a convolution operation along one axis.

Parameters:
  • image_shape (int or None) – Corresponds to the input image shape on the given axis. None if undefined.
  • kernel_shape (int or None) – Corresponds to the kernel shape on the given axis. None if undefined.
  • border_mode (str, int or tuple of 2 ints) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is an integer, it must correspond to the padding on the considered axis. If it is a tuple, its two elements must correspond to the asymmetric padding (e.g., left and right) on the considered axis.
  • subsample (int) – Corresponds to the subsampling on the considered axis.
  • dilation (int) – Corresponds to the dilation on the considered axis.
Returns:

out_shp – Int corresponding to the output image shape on the considered axis. None if undefined.

Return type:

int or None
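
Illustrative single-axis checks of the three string modes (32-wide input, kernel of width 3, unit stride):

    from aesara.tensor.nnet.abstract_conv import get_conv_shape_1axis

    print(get_conv_shape_1axis(32, 3, "valid", 1))   # expected: 30
    print(get_conv_shape_1axis(32, 3, "full", 1))    # expected: 34
    print(get_conv_shape_1axis(32, 3, "half", 1))    # expected: 32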

aesara.tensor.nnet.abstract_conv.separable_conv2d(input, depthwise_filters, pointwise_filters, num_channels, input_shape=None, depthwise_filter_shape=None, pointwise_filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, filter_dilation=(1, 1))[source]

This function will build the symbolic graph for a depthwise convolution, which acts separately on the input channels, followed by a pointwise convolution, which mixes the channels.

Parameters:
  • input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape.
  • depthwise_filters (symbolic 4D tensor) – Set of filters used in the depthwise convolution layer, of shape (depthwise output channels, 1, filter rows, filter columns).
  • pointwise_filters (symbolic 4D tensor) – Set of filters used in the pointwise convolution layer, of shape (output channels, depthwise output channels, 1, 1).
  • num_channels (int) – The number of channels of the input. Required for depthwise convolutions.
  • input_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • depthwise_filter_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the depthwise filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • pointwise_filter_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the pointwise filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of two int) –

    This applies only to the depthwise convolution. Either of the following:

    'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
    'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
    'half': pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape.
    int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
    (int1, int2): pad input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
    (int1, (int2, int3)) or ((int1, int2), int3): pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2).
    ((int1, int2), (int3, int4)): pad input with an asymmetric border of (int1, int2) along one dimension and (int3, int4) along the second dimension.
  • subsample (tuple of len 2) – Factor by which to subsample the output. This applies only to the depthwise convolution.
  • filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 2) – Factor by which to subsample (stride) the input. This applies only to the depthwise convolution.
Returns:

Set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns)

Return type:

Symbolic 4D tensor
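
A minimal sketch; the channel counts in the comments are illustrative assumptions:

    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import separable_conv2d

    images = at.tensor4("images")                        # (batch size, 3, rows, columns)
    depthwise_filters = at.tensor4("depthwise_filters")  # (3 * depth multiplier, 1, 3, 3)
    pointwise_filters = at.tensor4("pointwise_filters")  # (output channels, 3 * depth multiplier, 1, 1)
    out = separable_conv2d(images, depthwise_filters, pointwise_filters,
                           num_channels=3, border_mode="half", subsample=(1, 1))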

aesara.tensor.nnet.abstract_conv.separable_conv3d(input, depthwise_filters, pointwise_filters, num_channels, input_shape=None, depthwise_filter_shape=None, pointwise_filter_shape=None, border_mode='valid', subsample=(1, 1, 1), filter_flip=True, filter_dilation=(1, 1, 1))[source]

This function will build the symbolic graph for a depthwise convolution, which acts separately on the input channels, followed by a pointwise convolution, which mixes the channels.

Parameters:
  • input (symbolic 5D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input depth, input rows, input columns). See the optional parameter input_shape.
  • depthwise_filters (symbolic 5D tensor) – Set of filters used in the depthwise convolution layer, of shape (depthwise output channels, 1, filter depth, filter rows, filter columns).
  • pointwise_filters (symbolic 5D tensor) – Set of filters used in the pointwise convolution layer, of shape (output channels, depthwise output channels, 1, 1, 1).
  • num_channels (int) – The number of channels of the input. Required for depthwise convolutions.
  • input_shape (None, tuple/list of len 5 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • depthwise_filter_shape (None, tuple/list of len 5 of int or Constant variable) – The shape of the depthwise filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • pointwise_filter_shape (None, tuple/list of len 5 of int or Constant variable) – The shape of the pointwise filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
  • border_mode (str, int or tuple of three int) –

    This applies only to the depthwise convolution. Either of the following:

    'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
    'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
    'half': pad input with a symmetric border of filter // 2, then perform a valid convolution. For filters with an odd number of slices, rows and columns, this leads to the output shape being equal to the input shape.
    int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
    (int1, int2, int3): pad input with a symmetric border of int1 slices, int2 rows and int3 columns, then perform a valid convolution.
  • subsample (tuple of len 3) – Factor by which to subsample the output. Also called strides elsewhere. This applies only to the depthwise convolution.
  • filter_flip (bool) – If True, will flip the filter x, y and z dimensions before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
  • filter_dilation (tuple of len 3) – Factor by which to subsample (stride) the input. Also called dilation elsewhere.
Returns:

Set of feature maps generated by the convolutional layer. Tensor is of shape (batch size, output channels, output depth, output rows, output columns)

Return type:

Symbolic 5D tensor
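
A minimal sketch mirroring the 2D case; shapes in the comments are illustrative assumptions:

    import aesara.tensor as at
    from aesara.tensor.nnet.abstract_conv import separable_conv3d

    volumes = at.tensor5("volumes")                      # (batch size, 3, depth, rows, columns)
    depthwise_filters = at.tensor5("depthwise_filters")  # (3 * depth multiplier, 1, 3, 3, 3)
    pointwise_filters = at.tensor5("pointwise_filters")  # (output channels, 3 * depth multiplier, 1, 1, 1)
    out = separable_conv3d(volumes, depthwise_filters, pointwise_filters,
                           num_channels=3, border_mode="valid", subsample=(1, 1, 1))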