OpFromGraph

This page describes aesara.compile.builders.OpFromGraph, an Op constructor that allows one to encapsulate an Aesara graph in a single Op.

This can be used to encapsulate some functionality in a single block. It is useful for scaling Aesara compilation of larger graphs in which that encapsulated functionality is reused many times with different inputs: because of the encapsulation, Aesara's compilation phase can be faster for graphs with many nodes.

Using this for small graphs is not recommended, as it disables rewrites between what is inside the encapsulation and what is outside of it.

class aesara.compile.builders.OpFromGraph(inputs: List[Variable], outputs: List[Variable], inline: bool = False, lop_overrides: str = 'default', grad_overrides: str = 'default', rop_overrides: str = 'default', connection_pattern: List[List[bool]] | None = None, name: str | None = None, **kwargs)

This creates an Op from lists of input and output variables. The signature is similar to that of aesara.function, and the resulting Op's perform method will do the same operation as:

orig_function(inputs, outputs, **kwargs)

The updates and givens arguments are currently not supported.

Notes

  • We support shared variables in the inner graph. This is automatic and invisible to the user. They can appear as inputs to the node or inside the inner graph.

  • We support unused inputs. This is needed for the gradient.

  • We support nested OpFromGraph instances (a minimal sketch follows this list).

  • inline=True will cause better runtime optimization at the cost of compilation time. It currently only works with the fast_compile or fast_run modes.

  • For overriding, it is recommended to provide pure functions (no side effects such as setting a global variable) as the callable(s). The callable(s) supplied for overriding the gradient/R_op will be called only once, at the first call to grad/R_op, and will be converted to OpFromGraph instances.
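
As an illustration of the nested OpFromGraph note above, here is a minimal sketch (not part of the original examples; the variable names are illustrative):

from aesara import function, tensor as at
from aesara.compile.builders import OpFromGraph

a, b = at.scalars('a', 'b')
inner_op = OpFromGraph([a, b], [a * b])  # inner encapsulated graph
u, v, w = at.scalars('u', 'v', 'w')
# the outer wrapped graph applies inner_op, so the resulting Op contains a nested OpFromGraph
outer_op = OpFromGraph([u, v, w], [inner_op(u, v) + w])
fn = function([u, v, w], outer_op(u, v, w))  # compiles through both levels of encapsulation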

Examples

Example 1:

from aesara import function, tensor as at
from aesara.compile.builders import OpFromGraph
x, y, z = at.scalars('xyz')
e = x + y * z
op = OpFromGraph([x, y, z], [e])
# op behaves like a normal aesara op
e2 = op(x, y, z) + op(z, y, x)
fn = function([x, y, z], [e2])
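
fn can then be called like any compiled Aesara function; for instance, fn(1., 2., 3.) computes (1 + 2*3) + (3 + 2*1) and should return a value of 12.0 (the exact return type may depend on the configuration).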

Example 2 with shared variable:

import numpy as np
import aesara
from aesara import config, function, tensor as at
from aesara.compile.builders import OpFromGraph

x, y, z = at.scalars('xyz')
s = aesara.shared(np.random.random((2, 2)).astype(config.floatX))
e = x + y * z + s
op = OpFromGraph([x, y, z], [e])
# op behaves like a normal aesara op
e2 = op(x, y, z) + op(z, y, x)
fn = function([x, y, z], [e2])
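
The shared variable s is captured automatically and does not appear in the input list; its current value is used whenever the compiled function is called. As a minimal sketch (using the standard set_value method of shared variables):

# updating the shared variable affects subsequent calls to fn
s.set_value(np.zeros((2, 2), dtype=config.floatX))
fn(1., 2., 3.)  # evaluates (1 + 2*3 + s) + (3 + 2*1 + s) with the new value of s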

Example 3 with gradient override:

from aesara import function, tensor as at, grad
from aesara.compile.builders import OpFromGraph

x, y, z = at.scalars('xyz')
e = x + y * z
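# override for the gradient wrt y: return z*2 instead of the exact gradient g*z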
def rescale_dy(inps, grads):
    x, y, z = inps
    g, = grads
    return z*2
op = OpFromGraph(
    [x, y, z], [e], grad_overrides=['default', rescale_dy, 'default'])
e2 = op(x, y, z)
dx, dy, dz = grad(e2, [x, y, z])
fn = function([x, y, z], [dx, dy, dz])
# the gradient wrt y is now doubled
fn(2., 3., 4.) # [1., 8., 3.]
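
Without the override, the same call would return [1., 4., 3.], since the exact gradient of e with respect to y is z; the override instead returns z*2, i.e. 8.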