graph – Objects and functions for computational graphs

class aesara.graph.op.HasInnerGraph[source]
    A mixin for an Op that contains an inner graph.

    fgraph: FunctionGraph[source]
        A FunctionGraph of the inner function.


class aesara.graph.op.Op[source]
    A class that models and constructs operations in a graph.

    An Op instance has several responsibilities: construct Apply nodes via the
    Op.make_node() method, perform the numeric calculation of the modeled
    operation via the Op.perform() method, and (optionally) build the
    gradient-calculating subgraphs via the Op.grad() method.

    To see how Op, Type, Variable, and Apply fit together, see the page on
    graph – Interface for the Aesara graph. For more details regarding how
    these methods should behave, see the Op Contract in the sphinx docs
    (advanced tutorial on Op making).
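As a schematic illustration of this division of labor, a minimal sketch using stand-in Variable and Apply classes (these are mock-ups for illustration, not Aesara's real classes) might look like:

```python
class Variable:
    def __init__(self, owner=None):
        self.owner = owner  # the Apply node that produces this variable

class Apply:
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs
        for out in outputs:
            out.owner = self

class DoubleOp:
    """Models the operation y = 2 * x."""

    def make_node(self, x):
        # Symbolic step: wrap the input and a fresh output in an Apply node.
        return Apply(self, [x], [Variable()])

    def perform(self, node, inputs, output_storage):
        # Numeric step: read concrete values, write into the output cell.
        (x_val,) = inputs
        output_storage[0][0] = 2 * x_val

x = Variable()
node = DoubleOp().make_node(x)     # build the symbolic graph node
storage = [[None]]
node.op.perform(node, [3.0], storage)
# storage[0][0] is now 6.0
```

The same separation holds in Aesara: make_node() runs at graph-construction time, perform() at evaluation time.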
    L_op(inputs: Sequence[Variable], outputs: Sequence[Variable], output_grads: Sequence[Variable]) → List[Variable][source]
        Construct a graph for the L-operator.

        The L-operator computes a row vector times the Jacobian.

        This method dispatches to Op.grad() by default. In one sense, this
        method provides the original outputs when they're needed to compute
        the return value, whereas Op.grad does not. See Op.grad for a
        mathematical explanation of the inputs and outputs of this method.
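Numerically, the L-operator is a vector-Jacobian product. A pure-Python sketch for a hypothetical two-input, two-output function f (not an actual Op):

```python
def f(x0, x1):
    # Toy function standing in for an Op's outputs.
    return (x0 * x1, x0 + x1)

def f_L_op(x0, x1, output_grads):
    # Jacobian of f at (x0, x1): rows index outputs, columns index inputs.
    J = [[x1, x0],
         [1.0, 1.0]]
    # Row vector times Jacobian: one gradient entry per *input*.
    return [sum(output_grads[i] * J[i][j] for i in range(2))
            for j in range(2)]

grads = f_L_op(2.0, 3.0, [1.0, 1.0])
# grads == [4.0, 3.0]: d(x0*x1 + x0+x1)/dx0 = x1 + 1, d/dx1 = x0 + 1
```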

    R_op(inputs: List[Variable], eval_points: Union[Variable, List[Variable]]) → List[Variable][source]
        Construct a graph for the R-operator.

        This method is primarily used by Rop.

        Parameters:
            inputs – The Op inputs.
            eval_points – A Variable or list of Variables with the same length
                as inputs. Each element of eval_points specifies the value of
                the corresponding input at the point where the R-operator is
                to be evaluated.

        Returns:
            rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
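Numerically, the R-operator is the complementary Jacobian-times-vector product: eval_points supplies one tangent per input, and the result has one directional derivative per output. A pure-Python sketch for a hypothetical function f(x0, x1) = (x0*x1, x0 + x1), not an actual Op:

```python
def f_R_op(x0, x1, eval_points):
    # Jacobian of f(x0, x1) = (x0*x1, x0 + x1): rows index outputs,
    # columns index inputs.
    J = [[x1, x0],
         [1.0, 1.0]]
    # Jacobian times column vector: one directional derivative per *output*.
    return [sum(J[i][j] * eval_points[j] for j in range(2))
            for i in range(2)]

jvp = f_R_op(2.0, 3.0, [1.0, 0.0])
# jvp == [3.0, 1.0]: the derivative of each output in the direction (1, 0)
```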

    static add_tag_trace(thing: T, user_line: Optional[int] = None) → T[source]
        Add tag.trace to a node or variable.

        The argument is returned after being modified in place.

        Parameters:
            thing – The object to which .tag.trace is added.
            user_line – The maximum number of user lines to keep.

        Notes

        config.traceback__limit also bounds the number of stack levels
        inspected.

    default_output: Optional[int] = None[source]
        An int that specifies which output Op.__call__() should return. If
        None, then all outputs are returned.

        A subclass should not change this class variable, but instead override
        it with a subclass variable or an instance variable.

    destroy_map: Dict[int, List[int]] = {}[source]
        A dict that maps output indices to the input indices upon which they
        operate in place.

        Examples

            destroy_map = {0: [1]}  # first output operates in place on second input
            destroy_map = {1: [0]}  # second output operates in place on first input
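The promise encoded by destroy_map can be illustrated with plain Python lists (a schematic, not Aesara machinery): a destroy_map of {0: [0]} would declare that output 0 is computed by mutating input 0's buffer rather than allocating new memory.

```python
def inplace_double(buf):
    # Mutates its input buffer instead of allocating a new one,
    # which is what destroy_map = {0: [0]} would declare.
    for i in range(len(buf)):
        buf[i] *= 2
    return buf  # output 0 aliases (has destroyed) input 0

data = [1, 2, 3]
out = inplace_double(data)
# out is data -> True: the caller must know data was destroyed
```

Declaring this aliasing is what lets the rewriter avoid feeding the same buffer to two consumers.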

    do_constant_folding(fgraph: FunctionGraph, node: Apply) → bool[source]
        Determine whether or not constant folding should be performed for the
        given node.

        This allows each Op to determine whether it wants to be constant
        folded when all its inputs are constant, so it can choose its own
        memory/speed trade-off. It can also make things faster, since
        constants cannot be used for in-place operations (see *IncSubtensor).

        Parameters:
            node (Apply) – The node for which the constant folding
                determination is made.

        Returns:
            bool

    get_params(node: Apply) → Params[source]
        Try to get parameters for the Op when Op.params_type is set to a
        ParamsType.

    grad(inputs: Sequence[Variable], output_grads: Sequence[Variable]) → List[Variable][source]
        Construct a graph for the gradient with respect to each input
        variable.

        Each returned Variable represents the gradient with respect to that
        input, computed based on the symbolic gradients with respect to each
        output. If the output is not differentiable with respect to an input,
        then this method should return an instance of type NullType for that
        input.

        Using the reverse-mode AD characterization given in [1], for a
        function C = f(A, B) implemented by the Op, with its two arguments A
        and B given by the Variables in inputs, the values returned by Op.grad
        represent the quantities \bar{A} = dS_O/dA and \bar{B} = dS_O/dB, for
        some scalar output term S_O of C.

        Parameters:
            inputs – The input variables.
            output_grads – The gradients of the output variables.

        Returns:
            grads – The gradients with respect to each Variable in inputs.

        [1] Giles, Mike. 2008. "An Extended Collection of Matrix Derivative
        Results for Forward and Reverse Mode Automatic Differentiation."

    make_node(*inputs: Variable) → Apply[source]
        Construct an Apply node that represents the application of this
        operation to the given inputs.

        This must be implemented by subclasses.

        Returns:
            node (Apply) – The constructed Apply node.

    make_py_thunk(node: Apply, storage_map: Dict[Variable, List[Optional[Any]]], compute_map: Dict[Variable, List[bool]], no_recycling: List[Variable], debug: bool = False) → ThunkType[source]
        Make a Python thunk.

        Like Op.make_thunk() but only makes Python thunks.

    make_thunk(node: Apply, storage_map: Dict[Variable, List[Optional[Any]]], compute_map: Dict[Variable, List[bool]], no_recycling: List[Variable], impl: Optional[str] = None) → ThunkType[source]
        Create a thunk.

        This function must return a thunk: a zero-argument function that
        encapsulates the computation to be performed by this Op on the
        arguments of the node.

        Parameters:
            node – Something previously returned by Op.make_node().
            storage_map – A dict mapping Variables to single-element lists
                where a computed value for each Variable may be found.
            compute_map – A dict mapping Variables to single-element lists
                where a boolean value can be found. The boolean indicates
                whether the Variable's storage_map container contains a valid
                value (i.e. True) or whether it has not been computed yet
                (i.e. False).
            no_recycling – List of Variables for which it is forbidden to
                reuse memory allocated by a previous call.
            impl (str) – Description of the type of node created (e.g. "c",
                "py", etc.)

        Notes

        If the thunk consults the storage_map on every call, it is safe for it
        to ignore the no_recycling argument, because elements of the
        no_recycling list will have a value of None in the storage_map. If the
        thunk can potentially cache return values (like CLinker does), then it
        must not do so for variables in the no_recycling list.

        Op.prepare_node() is always called. If it tries 'c' and that fails, it
        then tries 'py', and Op.prepare_node() will be called twice.

    abstract perform(node: Apply, inputs: Sequence[Any], output_storage: List[List[Optional[Any]]], params: Optional[Tuple[Any]] = None) → None[source]
        Calculate the function on the inputs and put the results in the output
        storage.

        Parameters:
            node – The symbolic Apply node that represents this computation.
            inputs – Immutable sequence of non-symbolic/numeric inputs. These
                are the values of each Variable in node.inputs.
            output_storage – List of mutable single-element lists (do not
                change the length of these lists). Each sublist corresponds to
                the value of each Variable in node.outputs. The primary
                purpose of this method is to set the values of these sublists.
            params – A tuple containing the values of each entry in
                Op.__props__.

        Notes

        The output_storage list might contain data. If an element of
        output_storage is not None, it has to be of the right type; for
        instance, for a TensorVariable, it has to be a NumPy ndarray with the
        right number of dimensions and the correct dtype. Its shape and stride
        pattern can be arbitrary. It is not guaranteed that such preset values
        were produced by a previous call to this Op.perform(); they could have
        been allocated by another Op's perform method. An Op is free to reuse
        output_storage as it sees fit, or to discard it and allocate new
        memory.
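The output_storage contract can be sketched in plain Python, with lists standing in for arrays (a hypothetical perform-style function, not an Aesara Op):

```python
def perform_square(inputs, output_storage):
    # Squares each element of the single input; writes into the single
    # output cell, reusing a preset buffer of the right type if present.
    (x,) = inputs  # sequence of numeric input values
    out = output_storage[0][0]
    if isinstance(out, list) and len(out) == len(x):
        # A preset value of the right type may be reused in place.
        for i, v in enumerate(x):
            out[i] = v * v
    else:
        # Otherwise discard it and allocate fresh output memory.
        output_storage[0][0] = [v * v for v in x]

cell = [[None]]
perform_square([[1.0, 2.0, 3.0]], cell)   # allocates: cell[0][0] == [1.0, 4.0, 9.0]

preset = [0.0, 0.0]
cell2 = [[preset]]
perform_square([[2.0, 3.0]], cell2)       # reuses the preset buffer in place
```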

    prepare_node(node: Apply, storage_map: Optional[Dict[Variable, List[Optional[Any]]]], compute_map: Optional[Dict[Variable, List[bool]]], impl: Optional[str]) → None[source]
        Make any special modifications that the Op needs before doing
        Op.make_thunk().

        This can modify the node in place and should return nothing. It can be
        called multiple times with different impl values.

        Warning

        It is the Op's responsibility not to re-prepare the node when doing so
        is not appropriate.

aesara.graph.op.compute_test_value(node: Apply)[source]
    Computes the test value of a node.

    Parameters:
        node (Apply) – The Apply node for which the test value is computed.

    Returns:
        None. The tag.test_values are updated in each Variable in
        node.outputs.

aesara.graph.op.get_test_value(v: Any) → Any[source]
    Get the test value for v.

    If the input v is not already a variable, it is turned into one by calling
    as_tensor_variable(v).

    Raises:
        AttributeError – If no test value is set.

aesara.graph.op.get_test_values(*args: Variable) → Union[Any, List[Any]][source]
    Get test values for multiple Variables.

    Intended use:

        for val_1, ..., val_n in get_debug_values(var_1, ..., var_n):
            if some condition on val_1, ..., val_n is not met:
                missing_test_message("condition was not met")

    Given a list of variables, get_debug_values does one of three things:

    1. If the interactive debugger is off, returns an empty list.
    2. If the interactive debugger is on, and all variables have debug values,
       returns a list containing a single element. This single element is
       either:
       - the variable's value, if there is only one variable;
       - otherwise, a tuple containing the debug values of all the variables.
    3. If the interactive debugger is on, and some variable does not have a
       debug value, issues a missing_test_message about the variable and, if
       still in control of execution, returns an empty list.

aesara.graph.op.missing_test_message(msg: str) → None[source]
    Display a message saying that some test_value is missing.

    This uses the appropriate form based on config.compute_test_value:

    off
        The interactive debugger is off, so we do nothing.
    ignore
        The interactive debugger is set to ignore missing inputs, so do
        nothing.
    warn
        Display msg as a warning.

    Raises:
        AttributeError – With msg as the exception text.