tensor.extra_ops – Tensor Extra Ops

class aesara.tensor.extra_ops.Bartlett[source]
grad(inputs, output_grads)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

make_node(M)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, out_)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class aesara.tensor.extra_ops.BroadcastTo[source]

An Op for numpy.broadcast_to.

grad(inputs, outputs_gradients)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

make_node(a, *shape)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class aesara.tensor.extra_ops.CpuContiguous[source]

Check to see if the input is c-contiguous.

If it is, do nothing, else return a contiguous array.

c_code(node, name, inames, onames, sub)[source]

Return the C implementation of an Op.

Returns C code that does the computation associated with this Op, given names for the inputs and outputs.

Parameters:
  • node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.
  • name (str) – A name that is automatically assigned and guaranteed to be unique.
  • inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.
  • outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
c_code_cache_version()[source]

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an “unversioned” Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply

grad(inputs, dout)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

make_node(x)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
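At the value level this Op behaves like numpy.ascontiguousarray; the sketch below illustrates the contract in plain NumPy (an illustrative sketch, not the Op's actual C implementation):

```python
import numpy as np

# A transposed array is usually not C-contiguous.
a = np.arange(6).reshape(2, 3).T
assert not a.flags["C_CONTIGUOUS"]

# CpuContiguous-style behavior: return the input untouched when it is
# already contiguous, otherwise copy into a C-contiguous array.
b = np.ascontiguousarray(a)
assert b.flags["C_CONTIGUOUS"]
assert np.array_equal(b, a)

# An already-contiguous input needs no copy.
c = np.ascontiguousarray(b)
assert np.shares_memory(b, c)
```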

class aesara.tensor.extra_ops.CumOp(axis=None, mode='add')[source]
c_code(node, name, inames, onames, sub)[source]

Return the C implementation of an Op.

Returns C code that does the computation associated with this Op, given names for the inputs and outputs.

Parameters:
  • node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.
  • name (str) – A name that is automatically assigned and guaranteed to be unique.
  • inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.
  • outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
c_code_cache_version()[source]

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an “unversioned” Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply

grad(inputs, output_gradients)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

make_node(x)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage, params)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
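The mode argument selects between a running sum and a running product, matching numpy.cumsum and numpy.cumprod; a plain-NumPy sketch of the value-level semantics:

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# mode='add' corresponds to numpy.cumsum along the given axis
s = np.cumsum(x, axis=0)       # [[1, 2, 3], [5, 7, 9]]

# mode='mul' corresponds to numpy.cumprod; axis=None flattens first
p = np.cumprod(x, axis=None)   # [1, 2, 6, 24, 120, 720]
```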

class aesara.tensor.extra_ops.CumprodOp(*args, **kwargs)[source]
class aesara.tensor.extra_ops.CumsumOp(*args, **kwargs)[source]
class aesara.tensor.extra_ops.DiffOp(n=1, axis=-1)[source]
grad(inputs, outputs_gradients)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

make_node(x)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
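DiffOp(n, axis) mirrors numpy.diff: it takes the n-th order discrete difference along the chosen axis. A quick NumPy illustration:

```python
import numpy as np

x = np.array([1, 3, 6, 10])

# n=1 (the default): adjacent differences along the last axis
d1 = np.diff(x)       # [2, 3, 4]

# n=2: the difference operator applied twice
d2 = np.diff(x, n=2)  # [1, 1]
```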

class aesara.tensor.extra_ops.FillDiagonal[source]
grad(inp, cost_grad)[source]

Notes

The gradient is currently implemented for matrices only.

make_node(a, val)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
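Value-level semantics follow numpy.fill_diagonal, except that the Op produces a new tensor rather than mutating its input; hence the explicit copy in this sketch:

```python
import numpy as np

a = np.zeros((3, 3))

# numpy.fill_diagonal works in place, so copy first to leave `a` intact,
# matching the functional behavior of the Op.
out = a.copy()
np.fill_diagonal(out, 5.0)
# out: [[5, 0, 0], [0, 5, 0], [0, 0, 5]]
```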

class aesara.tensor.extra_ops.FillDiagonalOffset[source]
grad(inp, cost_grad)[source]

Notes

The gradient is currently implemented for matrices only.

make_node(a, val, offset)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class aesara.tensor.extra_ops.RavelMultiIndex(mode='raise', order='C')[source]
make_node(*inp)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inp, out)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
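This Op wraps numpy.ravel_multi_index, which converts per-dimension coordinate arrays into flat indices; a small NumPy example of the same computation:

```python
import numpy as np

rows = np.array([0, 1])
cols = np.array([1, 2])

# For a C-ordered (2, 3) array, flat index = row * 3 + col
flat = np.ravel_multi_index((rows, cols), dims=(2, 3), mode="raise", order="C")
# flat: [1, 5]
```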

class aesara.tensor.extra_ops.Repeat(axis=None)[source]
grad(inputs, gout)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

make_node(x, repeats)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
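The Op follows numpy.repeat: repeats may be a scalar or a vector, and axis=None flattens the input first. In plain NumPy:

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

# Scalar repeats with no axis: flatten, then repeat each element
r1 = np.repeat(x, 2)               # [1, 1, 2, 2, 3, 3, 4, 4]

# Vector repeats along axis 0: row 0 once, row 1 twice
r2 = np.repeat(x, [1, 2], axis=0)  # [[1, 2], [3, 4], [3, 4]]
```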

class aesara.tensor.extra_ops.SearchsortedOp(side='left')[source]

Wrapper for numpy.searchsorted.

For full documentation, see searchsorted().

See also

searchsorted
numpy-like function that uses SearchsortedOp
c_code(node, name, inames, onames, sub)[source]

Return the C implementation of an Op.

Returns C code that does the computation associated with this Op, given names for the inputs and outputs.

Parameters:
  • node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.
  • name (str) – A name that is automatically assigned and guaranteed to be unique.
  • inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.
  • outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
c_code_cache_version()[source]

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an “unversioned” Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply

c_init_code_struct(node, name, sub)[source]

Return an Apply-specific code string to be inserted in the struct initialization code.

Parameters:
  • node (Apply) – The node in the graph being compiled.
  • name (str) – A unique name to distinguish variables from those of other nodes.
  • sub (dict of str) – A dictionary of values to substitute in the code. Most notably it contains a 'fail' entry that you should place in your code after setting a Python exception to indicate an error.
c_support_code_struct(node, name)[source]

Return Apply-specific utility code for use by an Op that will be inserted at struct scope.

Parameters:
  • node (Apply) – The node in the graph being compiled
  • name (str) – A unique name to distinguish your variables from those of other nodes.
get_params(node)[source]

Try to get parameters for the Op when Op.params_type is set to a ParamsType.

grad(inputs, output_gradients)[source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable

make_node(x, v, sorter=None)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage, params)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
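As a wrapper for numpy.searchsorted, the side and sorter arguments behave exactly as in NumPy; a value-level sketch:

```python
import numpy as np

sorted_x = np.array([1, 2, 3, 4, 5])

# side='left': first position where 3 can be inserted keeping order
left = np.searchsorted(sorted_x, 3, side="left")    # 2
# side='right': position after any equal entries
right = np.searchsorted(sorted_x, 3, side="right")  # 3

# `sorter` gives indices that sort an otherwise unsorted array
unsorted = np.array([3, 1, 2])
order = np.argsort(unsorted)                        # [1, 2, 0]
idx = np.searchsorted(unsorted, 2, sorter=order)    # 1
```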

class aesara.tensor.extra_ops.Unique(return_index=False, return_inverse=False, return_counts=False, axis=None)[source]

Wraps numpy.unique. This Op is not implemented on the GPU.

Examples

>>> import numpy as np
>>> import aesara
>>> from aesara.tensor.extra_ops import Unique
>>> x = aesara.tensor.vector()
>>> f = aesara.function([x], Unique(True, True, False)(x))
>>> f([1, 2., 3, 4, 3, 2, 1.])
[array([ 1.,  2.,  3.,  4.]), array([0, 1, 2, 3]), array([0, 1, 2, 3, 2, 1, 0])]
>>> y = aesara.tensor.matrix()
>>> g = aesara.function([y], Unique(True, True, False)(y))
>>> g([[1, 1, 1.0], (2, 3, 3.0)])
[array([ 1.,  2.,  3.]), array([0, 3, 4]), array([0, 0, 0, 1, 2, 2])]
make_node(x)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inputs, output_storage)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class aesara.tensor.extra_ops.UnravelIndex(order='C')[source]
make_node(indices, dims)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
perform(node, inp, out)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in Op.__props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

aesara.tensor.extra_ops.bartlett(M)[source]

Return the Bartlett spectral window in the time domain. The Bartlett window is very similar to a triangular window, except that the end points are at zero. It is often used in signal processing for tapering a signal, without generating too much ripple in the frequency domain.

New in version 0.6.

Parameters:M (integer scalar) – Number of points in the output window. If zero or less, an empty vector is returned.
Returns:The triangular window, with the maximum value normalized to one (the value one appears only if the number of samples is odd), with the first and last samples equal to zero.
Return type:vector of doubles
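The window matches numpy.bartlett, which this helper mirrors symbolically; the numeric behavior:

```python
import numpy as np

# Triangular window with zero end points; the peak value 1 appears
# only when the number of points M is odd.
w = np.bartlett(5)     # [0.0, 0.5, 1.0, 0.5, 0.0]

# M <= 0 gives an empty vector, as documented above.
empty = np.bartlett(0)
```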
aesara.tensor.extra_ops.bincount(x, weights=None, minlength=None, assert_nonneg=False)[source]

Count number of occurrences of each value in an array of integers.

The number of bins (of size 1) is one larger than the largest value in x. If minlength is specified, there will be at least this number of bins in the output array (though it will be longer if necessary, depending on the contents of x). Each bin gives the number of occurrences of its index value in x. If weights is specified, the input array is weighted by it, i.e. if a value n is found at position i, out[n] += weights[i] instead of out[n] += 1.

Parameters:
  • x – A one dimensional array of non-negative integers
  • weights – An array of the same shape as x with corresponding weights. Optional.
  • minlength – A minimum number of bins for the output array. Optional.
  • assert_nonneg – A flag that inserts an assert_op to check if every input x is non-negative. Optional.

New in version 0.6.
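The semantics above match numpy.bincount, which can stand in for a quick check of the counting and weighting rules:

```python
import numpy as np

x = np.array([1, 2, 2, 4])

# Unweighted: one bin per integer from 0 to max(x)
counts = np.bincount(x)               # [0, 1, 2, 0, 1]

# Weighted: out[n] += weights[i] wherever x[i] == n
w = np.array([0.5, 1.0, 1.0, 2.0])
weighted = np.bincount(x, weights=w)  # [0.0, 0.5, 2.0, 0.0, 2.0]

# minlength pads the output with trailing empty bins
padded = np.bincount(x, minlength=7)  # length 7
```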
aesara.tensor.extra_ops.broadcast_arrays(*args: aesara.tensor.var.TensorVariable) → Tuple[aesara.tensor.var.TensorVariable, ...][source]

Broadcast any number of arrays against each other.

Parameters:*args – The arrays to broadcast.
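The result follows NumPy's broadcasting rules, as in numpy.broadcast_arrays:

```python
import numpy as np

row = np.arange(3)           # shape (3,)
col = np.arange(2)[:, None]  # shape (2, 1)

# Both outputs take the common broadcast shape (2, 3)
a, b = np.broadcast_arrays(row, col)
```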
aesara.tensor.extra_ops.broadcast_shape(*arrays, **kwargs)[source]

Compute the shape resulting from broadcasting arrays.

Parameters:
  • *arrays (TensorVariable) – The tensor variables, or their shapes (as tuples), for which the broadcast shape is computed.
  • arrays_are_shapes (bool (Optional)) – Indicates whether or not the arrays contain shape tuples. If you use this approach, make sure that the broadcastable dimensions are (scalar) constants with the value 1, or simply the integer 1.
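At the value level this is NumPy shape broadcasting: dimensions are matched from the right, and size-1 dimensions stretch. The same rules, checked numerically (numpy.broadcast_shapes requires NumPy >= 1.20):

```python
import numpy as np

# Array form: the broadcast of (3, 1) against (1, 4) is (3, 4)
shape = np.broadcast(np.empty((3, 1)), np.empty((1, 4))).shape

# Shape-tuple form (cf. arrays_are_shapes=True): missing leading
# dimensions are treated as size 1
shape2 = np.broadcast_shapes((5, 3, 1), (1, 4))  # (5, 3, 4)
```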
aesara.tensor.extra_ops.broadcast_shape_iter(arrays: Iterable[Union[aesara.tensor.var.TensorVariable, Tuple[aesara.tensor.var.TensorVariable, ...]]], arrays_are_shapes: bool = False)[source]

Compute the shape resulting from broadcasting arrays.

Warning

This function will not make copies, so be careful when calling it with a generator/iterator!

Parameters:
  • arrays – An iterable of tensors, or a tuple of shapes (as tuples), for which the broadcast shape is computed.
  • arrays_are_shapes – Indicates whether arrays contains shape tuples. If it does, make sure that the broadcastable dimensions are (scalar) constants with the value 1, or simply the integer 1.
aesara.tensor.extra_ops.broadcast_to(x: aesara.tensor.var.TensorVariable, shape: Union[aesara.tensor.var.TensorVariable, Tuple[aesara.graph.basic.Variable]]) aesara.tensor.var.TensorVariable[source]

Broadcast an array to a new shape.

Parameters:
  • x – The tensor to broadcast.
  • shape – The shape of the desired array.
Returns:

A readonly view on the original array with the given shape. It is typically not contiguous. Furthermore, more than one element of a broadcasted array may refer to a single memory location.

Return type:

TensorVariable

aesara.tensor.extra_ops.compress(condition, x, axis=None)[source]

Return selected slices of an array along given axis.

It returns the input tensor, but with selected slices along a given axis retained. If no axis is provided, the tensor is flattened. This corresponds to numpy.compress.

New in version 0.7.

Parameters:
  • condition – One dimensional array of non-zero and zero values corresponding to indices of slices along a selected axis.
  • x – Input data, tensor variable.
  • axis – The axis along which to slice.
Returns:

Return type:

x with selected slices.

aesara.tensor.extra_ops.cumprod(x, axis=None)[source]

Return the cumulative product of the elements along a given axis.

This wraps numpy.cumprod.

Parameters:
  • x – Input tensor variable.
  • axis – The axis along which the cumulative product is computed. The default (None) is to compute the cumprod over the flattened array.

New in version 0.7.

aesara.tensor.extra_ops.cumsum(x, axis=None)[source]

Return the cumulative sum of the elements along a given axis.

This wraps numpy.cumsum.

Parameters:
  • x – Input tensor variable.
  • axis – The axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

New in version 0.7.

aesara.tensor.extra_ops.diff(x, n=1, axis=-1)[source]

Calculate the n-th order discrete difference along the given axis.

The first order difference is given by out[i] = x[i + 1] - x[i] along the given axis; higher order differences are calculated by applying diff recursively. This wraps numpy.diff.

Parameters:
  • x – Input tensor variable.
  • n – The number of times values are differenced, default is 1.
  • axis – The axis along which the difference is taken, default is the last axis.

New in version 0.6.

aesara.tensor.extra_ops.fill_diagonal(a, val)[source]

Returns a copy of an array with all elements of the main diagonal set to a specified scalar value.

New in version 0.6.

Parameters:
  • a – Rectangular array of at least two dimensions.
  • val – Scalar value to fill the diagonal with; its type must be compatible with that of array a (i.e. val cannot be viewed as an upcast of a).
Returns:

  • array – An array identical to a except that its main diagonal is filled with scalar val. (For an array a with a.ndim >= 2, the main diagonal is the list of locations a[i, i, ..., i], i.e. those with all indices identical.)
  • Rectangular matrices are supported, as are tensors with more than two dimensions, provided all of the latter's dimensions are equal.

aesara.tensor.extra_ops.fill_diagonal_offset(a, val, offset)[source]

Returns a copy of an array with all elements of a specified offset diagonal set to a specified scalar value.

Parameters:
  • a – Rectangular array of two dimensions.
  • val – Scalar value to fill the diagonal with; its type must be compatible with that of array a (i.e. val cannot be viewed as an upcast of a).
  • offset – Integer scalar giving the offset of the diagonal from the main diagonal; can be positive or negative.
Returns:

An array identical to a except that its offset diagonal is filled with scalar val. The output is unwrapped.

Return type:

array

aesara.tensor.extra_ops.ravel_multi_index(multi_index, dims, mode='raise', order='C')[source]

Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index.

Parameters:
  • multi_index (tuple of Aesara or NumPy arrays) – A tuple of integer arrays, one array for each dimension.
  • dims (tuple of ints) – The shape of array into which the indices from multi_index apply.
  • mode ({'raise', 'wrap', 'clip'}, optional) – Specifies how out-of-bounds indices are handled. Either a single mode or a tuple of modes (one per index) may be given:
    • 'raise' – raise an error (default)
    • 'wrap' – wrap around
    • 'clip' – clip to the range
    In 'clip' mode, a negative index which would normally wrap will clip to 0 instead.
  • order ({'C', 'F'}, optional) – Determines whether the multi-index should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order.
Returns:

raveled_indices – An array of indices into the flattened version of an array of dimensions dims.

Return type:

TensorVariable

See also

unravel_index

aesara.tensor.extra_ops.repeat(x, repeats, axis=None)[source]

Repeat elements of an array.

It returns an array which has the same shape as x, except along the given axis. The axis parameter is used to specify the axis along which values are repeated. By default, a flattened version of x is used.

The number of repetitions for each element is repeats. repeats is broadcasted to fit the length of the given axis.

Parameters:
  • x – Input data, tensor variable.
  • repeats – Integer scalar or tensor variable giving the number of repetitions for each element.
  • axis (int, optional) – The axis along which to repeat values. By default, a flattened version of x is used.

See also

tensor.tile

aesara.tensor.extra_ops.searchsorted(x, v, side='left', sorter=None)[source]

Find indices where elements should be inserted to maintain order.

This wraps numpy.searchsorted. Find the indices into a sorted array x such that, if the corresponding elements in v were inserted before the indices, the order of x would be preserved.

Parameters:
  • x (1-D tensor (array-like)) – Input array. If sorter is None, then it must be sorted in ascending order, otherwise sorter must be an array of indices which sorts it.
  • v (tensor (array-like)) – Contains the values to be inserted into x.
  • side ({'left', 'right'}, optional.) – If 'left' (default), the index of the first suitable location found is given. If 'right', return the last such index. If there is no suitable index, return either 0 or N (where N is the length of x).
  • sorter (1-D tensor of integers (array-like), optional) – Contains indices that sort array x into ascending order. They are typically the result of argsort.
Returns:

indices – Array of insertion points with the same shape as v.

Return type:

tensor of integers (int64)

Notes

  • Binary search is used to find the required insertion points.
  • This Op currently works only on the CPU.

Examples

>>> from aesara import tensor as at
>>> from aesara.tensor import extra_ops
>>> x = at.dvector()
>>> idx = x.searchsorted(3)
>>> idx.eval({x: [1,2,3,4,5]})
array(2)
>>> extra_ops.searchsorted([1,2,3,4,5], 3).eval()
array(2)
>>> extra_ops.searchsorted([1,2,3,4,5], 3, side='right').eval()
array(3)
>>> extra_ops.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]).eval()
array([0, 5, 1, 2])

New in version 0.9.

aesara.tensor.extra_ops.squeeze(x, axis=None)[source]

Remove broadcastable dimensions from the shape of an array.

It returns the input array, but with the broadcastable dimensions removed. This is always x itself or a view into x.

New in version 0.6.

Parameters:
  • x – Input data, tensor variable.
  • axis (None or int or tuple of ints, optional) – Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.
Returns:

Return type:

x without its broadcastable dimensions.

aesara.tensor.extra_ops.to_one_hot(y, nb_class, dtype=None)[source]

Return a matrix where each row corresponds to the one-hot encoding of an element of y.

Parameters:
  • y – A vector of integer values between 0 and nb_class - 1.
  • nb_class (int) – The number of classes in y.
  • dtype (data-type) – The dtype of the returned matrix. Default aesara.config.floatX.
Returns:

A matrix of shape (y.shape[0], nb_class), where each row i is the one-hot encoding of the corresponding value y[i].

Return type:

object

aesara.tensor.extra_ops.unique(ar, return_index=False, return_inverse=False, return_counts=False, axis=None)[source]

Find the unique elements of an array.

Returns the sorted unique elements of an array. There are three optional outputs in addition to the unique elements:

  • the indices of the input array that give the unique values
  • the indices of the unique array that reconstruct the input array
  • the number of times each unique value comes up in the input array
aesara.tensor.extra_ops.unravel_index(indices, dims, order='C')[source]

Converts a flat index or array of flat indices into a tuple of coordinate arrays.

Parameters:
  • indices (Aesara or NumPy array) – An integer array whose elements are indices into the flattened version of an array of dimensions dims.
  • dims (tuple of ints) – The shape of the array to use for unraveling indices.
  • order ({'C', 'F'}, optional) – Determines whether the indices should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order.
Returns:

unraveled_coords – Each array in the tuple has the same shape as the indices array.

Return type:

tuple of ndarray