aesara.tensor.clip
aesara.tensor.clip = <aesara.tensor.elemwise.Elemwise object>
Clip x to be between min and max.

Note that when x is equal to the boundaries, the output is considered to be x, so at these points the gradient of the cost wrt the output will be propagated to x, not to min nor max. In other words, at these points the gradient wrt x will be equal to the gradient wrt the output, and the gradient wrt min and max will be zero.
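This boundary behaviour can be checked directly. The following is a minimal sketch; the variable name x and the bounds 0.0 and 1.0 are illustrative, not part of the API:

```python
import aesara
import aesara.tensor as at

x = at.dscalar("x")
y = at.clip(x, 0.0, 1.0)   # clip x to the interval [0, 1]

g = aesara.grad(y, x)      # gradient of the clipped output wrt x
f = aesara.function([x], g)

print(f(0.5))   # 1.0: strictly inside the interval, the gradient passes through
print(f(0.0))   # 1.0: on the boundary, the gradient is still propagated to x
print(f(2.0))   # 0.0: outside the interval, the gradient wrt x is zero
```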
Generalizes a scalar Op to tensors.

All the inputs must have the same number of dimensions. When the Op is performed, for each dimension, each input's size for that dimension must be the same. As a special case, it can also be one, but only if the input's broadcastable flag is True for that dimension. In that case, the tensor is (virtually) replicated along that dimension to match the size of the others.

The dtypes of the outputs mirror those of the scalar Op that is being generalized to tensors. In particular, if the calculations for an output are done in-place on an input, the output type must be the same as the corresponding input type (see the documentation of ScalarOp for help with controlling the output type).

Examples:
- Elemwise(add): represents + on tensors: x + y
- Elemwise(add, {0 : 0}): represents the += operation: x += y
- Elemwise(add, {0 : 1}): represents += on the second argument: y += x
- Elemwise(mul)(np.random.random((10, 5)), np.random.random((1, 5))): the second input is replicated along the first dimension to match the first input
- Elemwise(true_divide)(np.random.random((10, 5)), np.random.random((10, 1))): the same, but along the second dimension
- Elemwise(floor_div)(np.random.random((1, 5)), np.random.random((10, 1))): the output has size (10, 5)
- Elemwise(log)(np.random.random((3, 4, 5)))
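For a user-level illustration of the broadcasting and dtype behaviour described above, the sketch below uses the ordinary tensor operators, which build the corresponding Elemwise nodes automatically; the variable names and shapes are illustrative:

```python
import numpy as np
import aesara
import aesara.tensor as at

x = at.dmatrix("x")   # matrix with no broadcastable dimensions
r = at.row("r")       # matrix whose first dimension is broadcastable (size 1)
z = x * r             # builds an Elemwise(mul) node

print(z.dtype)        # float64 under the default floatX: mirrors scalar mul on these inputs

f = aesara.function([x, r], z)
out = f(np.random.random((10, 5)), np.random.random((1, 5)))
print(out.shape)      # (10, 5): the row is (virtually) replicated along the first dimension
```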