graph – Interface for the Aesara graph

Reference

Core graph classes.

class aesara.graph.basic.Apply(op, inputs, outputs)

A Node representing the application of an operation to inputs.

An Apply instance serves as a simple structure with three important attributes:

- inputs: a list of Variable nodes that represent the arguments of the expression,
- outputs: a list of Variable nodes that represent the computed outputs of the expression, and
- op: an Op instance that determines the nature of the expression being applied.

Basically, an Apply instance is an object that represents the Python statement outputs = op(*inputs).

This class is typically instantiated by an Op.make_node method, which is called by Op.__call__.

The function aesara.compile.function.function uses Apply.inputs together with Variable.owner to search the expression graph and determine which inputs are necessary to compute the function's outputs.

A Linker uses the Apply instance's op field to compute numeric values for the output variables.

Parameters:
- op (Op instance) –
- inputs (list of Variable instances) –
- outputs (list of Variable instances) –

Notes

The Variable.owner field of each Apply.outputs element is set to self in Apply.make_node. If an output element has an owner that is neither None nor self, a ValueError is raised.
clone()

Duplicate this Apply instance with inputs = self.inputs.

Returns: A new Apply instance (or subclass instance) with new outputs.
Return type: object

Notes

Tags are copied from self to the returned instance.

clone_with_new_inputs(inputs, strict=True)

Duplicate this Apply instance in a new graph.

Parameters:
- inputs (list of Variable) – List of Variable instances to use as inputs.
- strict (bool) – If True, the type fields of all the inputs must be equal to the current ones (or compatible; for instance, Tensor/GpuArray of the same dtype and broadcastable pattern, in which case they will be converted into the current Type), and the returned outputs are guaranteed to have the same types as self.outputs. If False, there is no guarantee that the clone's outputs will have the same types as self.outputs, and cloning may not even be possible (it depends on the Op).

Returns: An Apply instance with the same Op but different outputs.
Return type: object

default_output()

Return the default output for this node.

Returns: An element of self.outputs, typically self.outputs[0].
Return type: Variable instance

Notes

May raise AttributeError if self.op.default_output is out of range, or if there are multiple outputs and self.op.default_output does not exist.

class aesara.graph.basic.Constant(type, data, name=None)

A Variable with a fixed data field.

Constant nodes make numerous optimizations possible (e.g. constant inlining in C code, constant folding, etc.).

Notes

The data field is filtered by what is provided in the constructor for the Constant's type field.
clone()

Create a shallow clone.

We clone this object but do not clone the data, to lower memory requirements. We assume the data will never change.

get_test_value()

Get the test value.

Raises: TestValueError


class aesara.graph.basic.Node

A Node in an Aesara graph.

Currently, graphs contain two kinds of Nodes: Variables and Applys. Edges in the graph are not explicitly represented. Instead, each Node keeps track of its parents via Variable.owner / Apply.inputs.

class aesara.graph.basic.Variable(type, owner=None, index=None, name=None)

A Variable is a node in an expression graph that represents a variable.

The inputs and outputs of every Apply are Variable instances. The input and output arguments used to create a function are also Variable instances. A Variable is like a strongly typed variable in some other languages; each Variable contains a reference to a Type instance that defines the kind of value the Variable can take in a computation.

A Variable is a container for four important attributes:

- type: a Type instance defining the kind of value this Variable can have,
- owner: either None (for graph roots) or the Apply instance of which self is an output,
- index: the integer such that owner.outputs[index] is this_variable (ignored if owner is None),
- name: a string to use in pretty-printing and debugging.
There are a few kinds of Variables to be aware of. A Variable which is the output of a symbolic computation has a reference to the Apply instance to which it belongs (property: owner) and the position of itself in the owner's output list (property: index).

- Variable (this base type) is typically the output of a symbolic computation.
- Constant: a subclass which adds a default and unreplaceable value, and requires that owner is None.
- TensorVariable: a subclass of Variable that represents a numpy.ndarray object.
- TensorSharedVariable: a shared version of TensorVariable.
- SparseVariable: a subclass of Variable that represents a scipy.sparse.{csc,csr}_matrix object.
- GpuArrayVariable: a subclass of Variable that represents our object on the GPU; it is a subset of numpy.ndarray.
- RandomVariable.
A Variable which is the output of a symbolic computation will have an owner not equal to None.

Using a Variable's owner field and an Apply node's inputs field, one can navigate a graph from an output all the way to the inputs. The opposite direction is possible with a FunctionGraph and its FunctionGraph.clients dict, which maps Variables to a list of their clients.

Parameters:
- type (Type instance) – The type governs the kind of data that can be associated with this variable.
- owner (None or Apply instance) – The Apply instance which computes the value for this variable.
- index (None or int) – The position of this Variable in owner.outputs.
- name (None or str) – A string for pretty-printing and debugging.
Examples

import aesara
import aesara.tensor as at

a = at.constant(1.5)            # declare a symbolic constant
b = at.fscalar()                # declare a symbolic floating-point scalar
c = a + b                       # create a simple expression

f = aesara.function([b], [c])   # this works because a has a value associated with it already

assert 4.0 == f(2.5)            # bind 2.5 to an internal copy of b and evaluate an internal c

aesara.function([a], [c])       # compilation error because b (required by c) is undefined
aesara.function([a, b], [c])    # compilation error because a is constant; it can't be an input

The Python variables a, b, c all refer to instances of type Variable. The Variable referred to by a is also an instance of Constant.
clone()

Return a new Variable like self.

Returns: A new Variable instance (or subclass instance) with no owner or index.
Return type: Variable instance

Notes

Tags and the name are copied to the returned instance.

eval(inputs_to_values=None)

Evaluate the Variable.

Parameters: inputs_to_values – A dictionary mapping Aesara Variables to values.

Examples

>>> import numpy as np
>>> import aesara.tensor as at
>>> x = at.dscalar('x')
>>> y = at.dscalar('y')
>>> z = x + y
>>> np.allclose(z.eval({x: 16.3, y: 12.1}), 28.4)
True

We passed eval() a dictionary mapping symbolic Aesara Variables to the values to substitute for them, and it returned the numerical value of the expression.

Notes

eval() will be slow the first time you call it on a variable: it needs to call function() to compile the expression behind the scenes. Subsequent calls to eval() on that same variable will be fast, because the variable caches the compiled function.

This way of computing has more overhead than a normal Aesara function, so don't use it too much in real scripts.

get_parents()

Return a list of the parents of this node. The result should be a copy: modifying the return value should not modify the graph structure.

get_test_value()

Get the test value.

Raises: TestValueError

aesara.graph.basic.ancestors(graphs: Iterable[Variable], blockers: Optional[Collection[Variable]] = None) → Generator[Variable, None, None]

Return the variables that contribute to those in the given graphs (inclusive).

Yields: Variables – All input nodes, in the order found by a left-recursive depth-first search started at the nodes in graphs.

aesara.graph.basic.applys_between(ins: Collection[Variable], outs: Iterable[Variable]) → Generator[Apply, None, None]

Extract the Applys contained within the subgraph between the given input and output variables.

aesara.graph.basic.as_string(inputs: List[Variable], outputs: List[Variable], leaf_formatter=str, node_formatter=default_node_formatter) → List[str]

Return a string representation of the subgraph between inputs and outputs.

Returns: A string representation of the subgraph between inputs and outputs. If the same node is used by several other nodes, the first occurrence will be marked as *n -> description and all subsequent occurrences will be marked as *n, where n is an id number (ids are attributed in an unspecified order and only exist for viewing convenience).
Return type: list of str

aesara.graph.basic.clone(inputs: List[Variable], outputs: List[Variable], copy_inputs: bool = True, copy_orphans: Optional[bool] = None) → Tuple[Collection[Variable], Collection[Variable]]

Copy the subgraph contained between inputs and outputs.

Returns: The inputs and outputs of that copy.

Notes

A constant in the inputs list is not an orphan, so it will be copied conditional on the copy_inputs parameter; otherwise, it will be copied conditional on the copy_orphans parameter.

aesara.graph.basic.clone_get_equiv(inputs: List[Variable], outputs: List[Variable], copy_inputs: bool = True, copy_orphans: bool = True, memo: Optional[Dict[Variable, Variable]] = None)

Return a dictionary that maps from Variable and Apply nodes in the original graph to a new node (a clone) in a new graph.

This function works by recursively cloning inputs… rebuilding a directed graph from the inputs up to eventually building new outputs.

Parameters:
- inputs (list of Variable) –
- outputs (list of Variable) –
- copy_inputs (bool) – True means to create the cloned graph from new input nodes (the bottom of a feed-upward graph). False means to clone a graph that is rooted at the original input nodes.
- copy_orphans (bool) – When True, new constant nodes are created. When False, original constant nodes are reused in the new graph.
- memo (None or dict) – Optionally start with a partly-filled dictionary for the return value. If a dictionary is passed, this function will work in-place on that dictionary and return it.

aesara.graph.basic.clone_replace(output: Collection[Variable], replace: Optional[Dict[Variable, Variable]] = None, strict: bool = True, share_inputs: bool = True) → Collection[Variable]

Clone a graph and replace subgraphs within it.

It returns a copy of the initial subgraph with the corresponding substitutions.

Parameters:
- output (Aesara Variables (or Aesara expressions)) – Aesara expression that represents the computational graph.
- replace (dict) – Dictionary describing which subgraphs should be replaced by what.
- share_inputs (bool) – If True, use the same inputs (and shared variables) as the original graph. If False, clone them. Note that cloned shared variables still use the same underlying storage, so they will always have the same value.

aesara.graph.basic.equal_computations(xs, ys, in_xs=None, in_ys=None)

Check whether two Aesara graphs represent the same computations.

The two lists xs and ys should have the same number of entries. The function checks whether, for each corresponding pair (x, y) from zip(xs, ys), x and y represent the same computations on the same variables (unless equivalences are provided using in_xs and in_ys).

If in_xs and in_ys are provided, then when comparing a node x with a node y, they are automatically considered equal if there is some index i such that x == in_xs[i] and y == in_ys[i] (and they both have the same type). Note that x and y can be in the lists xs and ys, but they can also represent subgraphs of a computational graph in xs or ys.

Parameters:
- xs (list of Variable) –
- ys (list of Variable) –

Return type: bool

aesara.graph.basic.general_toposort(outputs: Iterable[T], deps: Callable[[T], Union[OrderedSet, List[T]]], compute_deps_cache: Optional[Callable[[T], Union[OrderedSet, List[T]]]] = None, deps_cache: Optional[Dict[T, List[T]]] = None, clients: Optional[Dict[T, List[T]]] = None) → List[T]

Perform a topological sort of all nodes starting from a given node.

Parameters:
- deps (callable) – A Python function that takes a node as input and returns its dependencies.
- compute_deps_cache (optional) – If provided, deps_cache should also be provided. This is a function like deps, but one that also caches its results in a dict passed as deps_cache.
- deps_cache (dict) – A dict mapping nodes to their children. This is populated by compute_deps_cache.
- clients (dict) – If a dict is passed, it will be filled with a mapping of node-to-clients for each node in the subgraph.

Notes

deps(i) should behave like a pure function (no funny business with internal state). deps(i) will be cached by this function (to be fast). The order of the return-value list is determined by the order of nodes returned by the deps function.

Using compute_deps_cache instead of deps removes a Python function call and allows for more specialized code, so it can be faster.

aesara.graph.basic.get_var_by_name(graphs: Iterable[Variable], target_var_id: str, ids: str = 'CHAR') → Tuple[Variable]

Get variables in a graph using their names.

Parameters:
- graphs – The graph, or graphs, to search.
- target_var_id – The name to match against either Variable.name or Variable.auto_name.

Returns: A tuple containing all the Variables that match target_var_id.

aesara.graph.basic.graph_inputs(graphs: Iterable[Variable], blockers: Optional[Collection[Variable]] = None) → Generator[Variable, None, None]

Return the inputs required to compute the given Variables.

Yields: Input nodes with no owner, in the order found by a left-recursive depth-first search started at the nodes in graphs.

aesara.graph.basic.io_connection_pattern(inputs, outputs)

Return the connection pattern of a subgraph defined by the given inputs and outputs.

aesara.graph.basic.io_toposort(inputs: List[Variable], outputs: List[Variable], orderings: Optional[Dict[Apply, List[Apply]]] = None, clients: Optional[Dict[Variable, List[Variable]]] = None) → List[Apply]

Perform a topological sort from input and output nodes.

Parameters:
- inputs (list or tuple of Variable instances) – Graph inputs.
- outputs (list or tuple of Variable instances) – Graph outputs.
- orderings (dict) – Keys are Apply instances, values are lists of Apply instances.
- clients (dict) – If provided, it will be filled with mappings of node-to-clients for each node in the subgraph that is sorted.

aesara.graph.basic.is_in_ancestors(l_apply: Apply, f_node: Apply) → bool

Determine whether f_node is in the graph given by l_apply.

Return type: bool

aesara.graph.basic.list_of_nodes(inputs: Collection[Variable], outputs: Iterable[Variable]) → List[Apply]

Return the Apply nodes of the graph between inputs and outputs.

aesara.graph.basic.nodes_constructed()

A context manager used in inherit_stack_trace to keep track of all the variable nodes created inside an optimization. A new_nodes list is instantiated, but it is filled lazily (when Variable.notify_construction_observers is called). The observer is the entity that updates the new_nodes list. construction_observers is a list inside the Variable class that contains observer functions; these are called only when a Variable is instantiated (i.e., when Variable.notify_construction_observers is called). When an observer function is called, the newly created Variable is added to the new_nodes list.

Yields: new_nodes – A list of all the Variables created inside the optimization.

aesara.graph.basic.op_as_string(i, op, leaf_formatter=str, node_formatter=default_node_formatter)

Return a function that returns a string representation of the subgraph between i and op.inputs.

aesara.graph.basic.orphans_between(ins: Collection[Variable], outs: Iterable[Variable]) → Generator[Variable, None, None]

Extract the Variables not within the subgraph between input and output nodes.

Yields: Variable – The Variables upon which one or more Variables in outs depend, but which are neither in ins nor in the subgraph that lies between them.

Examples

>>> orphans_between([x], [(x + y).out])
[y]

aesara.graph.basic.vars_between(ins: Collection[Variable], outs: Iterable[Variable]) → Generator[Variable, None, None]

Extract the Variables within the subgraph between input and output nodes.

Yields: Variables – The Variables involved in the subgraph that lies between ins and outs. This includes ins, outs, orphans_between(ins, outs), and all values of all intermediary steps from ins to outs.

aesara.graph.basic.view_roots(node: Variable) → List[Variable]

Return the leaves from a search through consecutive view maps.

aesara.graph.basic.walk(nodes: Iterable[T], expand: Callable[[T], Optional[Sequence[T]]], bfs: bool = True, return_children: bool = False, hash_fn: Callable[[T], Hashable] = id) → Generator[T, None, Dict[T, List[T]]]

Walk through a graph, either breadth- or depth-first.

Parameters:
- nodes (deque) – The nodes from which to start walking.
- expand (callable) – A callable that is applied to each node in nodes; its results are either new nodes to visit or None.
- bfs (bool) – If True, breadth-first search is used; otherwise, depth-first search.
- return_children (bool) – If True, each output node will be accompanied by the output of expand (i.e. the corresponding child nodes).
- hash_fn (callable) – The function used to produce hashes of the elements in nodes. The default is id.

Yields: nodes

Notes

A node will appear at most once in the return value, even if it appears multiple times in the nodes parameter.