Tensor creation#
Aesara provides a list of predefined tensor types that can be used to create tensor variables. Variables can be named to facilitate debugging, and all of these constructors accept an optional name argument. For example, each of the following produces a TensorVariable instance that stands for a 0-dimensional ndarray of integers with the name 'myvar':
>>> import aesara.tensor as at
>>> x = at.scalar('myvar', dtype='int32')
>>> x = at.iscalar('myvar')
>>> x = at.tensor(dtype='int32', shape=(), name='myvar')
>>> x = at.TensorType(dtype='int32', shape=())('myvar')
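All four forms above are equivalent. As a quick check (a minimal sketch of inspecting the result, using only standard TensorVariable attributes), the name, dtype, and number of dimensions can be read back from the variable:
>>> x.name
'myvar'
>>> x.dtype
'int32'
>>> x.ndim
0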
Basic constructors#
These are the simplest and often-preferred methods for creating symbolic variables in your code. By default, they produce floating-point variables (with dtype determined by aesara.config.floatX), so using these constructors makes it easy to switch your code between different levels of floating-point precision.
- scalar: Return a symbolic scalar variable.
- vector: Return a symbolic vector variable.
- row: Return a symbolic row variable (i.e. a 2-dimensional variable whose first dimension has shape 1).
- col: Return a symbolic column variable (i.e. a 2-dimensional variable whose second dimension has shape 1).
- matrix: Return a symbolic matrix variable.
- tensor3: Return a symbolic 3D variable.
- tensor4: Return a symbolic 4D variable.
- tensor5: Return a symbolic 5D variable.
- tensor6: Return a symbolic 6D variable.
- tensor7: Return a symbolic 7D variable.
>>> x = at.scalar()
>>> x.type.shape
()
>>> y = at.vector()
>>> y.type.shape
(None,)
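The dtype of these variables follows aesara.config.floatX, so a quick way to confirm the precision in use is to compare the two directly (a minimal check; it holds whatever floatX is set to, since the constructors read it at creation time):
>>> import aesara
>>> m = at.matrix()
>>> m.dtype == aesara.config.floatX
True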
Typed Constructors#
The following TensorType instances are provided in the aesara.tensor module. They are all callable, and accept an optional name argument. For example:
x = at.dmatrix() # creates one Variable with no name
x = at.dmatrix('x') # creates one Variable with name 'x'
xyz = at.dmatrix('xyz') # creates one Variable with name 'xyz'
Constructor | dtype | ndim | shape | broadcastable
---|---|---|---|---
bscalar | int8 | 0 | () | ()
bvector | int8 | 1 | (?,) | (False,)
brow | int8 | 2 | (1,?) | (True, False)
bcol | int8 | 2 | (?,1) | (False, True)
bmatrix | int8 | 2 | (?,?) | (False, False)
btensor3 | int8 | 3 | (?,?,?) | (False, False, False)
btensor4 | int8 | 4 | (?,?,?,?) | (False, False, False, False)
btensor5 | int8 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
btensor6 | int8 | 6 | (?,?,?,?,?,?) | (False,) * 6
btensor7 | int8 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
wscalar | int16 | 0 | () | ()
wvector | int16 | 1 | (?,) | (False,)
wrow | int16 | 2 | (1,?) | (True, False)
wcol | int16 | 2 | (?,1) | (False, True)
wmatrix | int16 | 2 | (?,?) | (False, False)
wtensor3 | int16 | 3 | (?,?,?) | (False, False, False)
wtensor4 | int16 | 4 | (?,?,?,?) | (False, False, False, False)
wtensor5 | int16 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
wtensor6 | int16 | 6 | (?,?,?,?,?,?) | (False,) * 6
wtensor7 | int16 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
iscalar | int32 | 0 | () | ()
ivector | int32 | 1 | (?,) | (False,)
irow | int32 | 2 | (1,?) | (True, False)
icol | int32 | 2 | (?,1) | (False, True)
imatrix | int32 | 2 | (?,?) | (False, False)
itensor3 | int32 | 3 | (?,?,?) | (False, False, False)
itensor4 | int32 | 4 | (?,?,?,?) | (False, False, False, False)
itensor5 | int32 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
itensor6 | int32 | 6 | (?,?,?,?,?,?) | (False,) * 6
itensor7 | int32 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
lscalar | int64 | 0 | () | ()
lvector | int64 | 1 | (?,) | (False,)
lrow | int64 | 2 | (1,?) | (True, False)
lcol | int64 | 2 | (?,1) | (False, True)
lmatrix | int64 | 2 | (?,?) | (False, False)
ltensor3 | int64 | 3 | (?,?,?) | (False, False, False)
ltensor4 | int64 | 4 | (?,?,?,?) | (False, False, False, False)
ltensor5 | int64 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ltensor6 | int64 | 6 | (?,?,?,?,?,?) | (False,) * 6
ltensor7 | int64 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
dscalar | float64 | 0 | () | ()
dvector | float64 | 1 | (?,) | (False,)
drow | float64 | 2 | (1,?) | (True, False)
dcol | float64 | 2 | (?,1) | (False, True)
dmatrix | float64 | 2 | (?,?) | (False, False)
dtensor3 | float64 | 3 | (?,?,?) | (False, False, False)
dtensor4 | float64 | 4 | (?,?,?,?) | (False, False, False, False)
dtensor5 | float64 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
dtensor6 | float64 | 6 | (?,?,?,?,?,?) | (False,) * 6
dtensor7 | float64 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
fscalar | float32 | 0 | () | ()
fvector | float32 | 1 | (?,) | (False,)
frow | float32 | 2 | (1,?) | (True, False)
fcol | float32 | 2 | (?,1) | (False, True)
fmatrix | float32 | 2 | (?,?) | (False, False)
ftensor3 | float32 | 3 | (?,?,?) | (False, False, False)
ftensor4 | float32 | 4 | (?,?,?,?) | (False, False, False, False)
ftensor5 | float32 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ftensor6 | float32 | 6 | (?,?,?,?,?,?) | (False,) * 6
ftensor7 | float32 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
cscalar | complex64 | 0 | () | ()
cvector | complex64 | 1 | (?,) | (False,)
crow | complex64 | 2 | (1,?) | (True, False)
ccol | complex64 | 2 | (?,1) | (False, True)
cmatrix | complex64 | 2 | (?,?) | (False, False)
ctensor3 | complex64 | 3 | (?,?,?) | (False, False, False)
ctensor4 | complex64 | 4 | (?,?,?,?) | (False, False, False, False)
ctensor5 | complex64 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ctensor6 | complex64 | 6 | (?,?,?,?,?,?) | (False,) * 6
ctensor7 | complex64 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
zscalar | complex128 | 0 | () | ()
zvector | complex128 | 1 | (?,) | (False,)
zrow | complex128 | 2 | (1,?) | (True, False)
zcol | complex128 | 2 | (?,1) | (False, True)
zmatrix | complex128 | 2 | (?,?) | (False, False)
ztensor3 | complex128 | 3 | (?,?,?) | (False, False, False)
ztensor4 | complex128 | 4 | (?,?,?,?) | (False, False, False, False)
ztensor5 | complex128 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ztensor6 | complex128 | 6 | (?,?,?,?,?,?) | (False,) * 6
ztensor7 | complex128 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
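For instance, a quick check of one of the typed constructors (a minimal doctest-style sketch using standard TensorVariable attributes) confirms the dtype, ndim, and broadcastable pattern listed in the table:
>>> r = at.frow('r')
>>> r.dtype
'float32'
>>> r.ndim
2
>>> r.broadcastable
(True, False)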
Plural Constructors#
There are several constructors that can produce multiple variables at once. These are not used frequently in practice, but they appear often in tutorial examples because they save space.
- iscalars, lscalars, fscalars, dscalars
Return one or more scalar variables.
- ivectors, lvectors, fvectors, dvectors
Return one or more vector variables.
- irows, lrows, frows, drows
Return one or more row variables.
- icols, lcols, fcols, dcols
Return one or more col variables.
- imatrices, lmatrices, fmatrices, dmatrices
Return one or more matrix variables.
Each of these plural constructors accepts an integer or several strings. If an integer is provided, the method will return that many Variables, and if strings are provided, it will create one Variable for each string, using the string as the Variable's name. For example:
# Creates three matrix `Variable`s with no names
x, y, z = at.dmatrices(3)
# Creates three matrix `Variable`s named 'x', 'y' and 'z'
x, y, z = at.dmatrices('x', 'y', 'z')
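As a small sketch of typical usage (the compiled-function step assumes aesara.function, as used elsewhere in these docs), the scalar variant works the same way:
>>> import aesara
>>> a, b = at.dscalars('a', 'b')   # two named float64 scalars at once
>>> f = aesara.function([a, b], a + b)
>>> float(f(2.0, 3.0))
5.0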
Custom tensor types#
If you would like to construct a tensor variable with a non-standard broadcastable pattern, or a larger number of dimensions, you'll need to create your own TensorType instance. You create such an instance by passing the dtype and static shape to the constructor. For example, you can create your own 8-dimensional tensor type:
>>> dtensor8 = at.TensorType(dtype='float64', shape=(None,)*8)
>>> x = dtensor8()
>>> z = dtensor8('z')
You can also redefine some of the provided types and they will interact correctly:
>>> my_dmatrix = at.TensorType('float64', shape=(None,)*2)
>>> x = my_dmatrix()  # allocate a matrix variable
>>> my_dmatrix == at.dmatrix
True
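A variable built from such a custom type behaves like any other tensor variable; continuing the example above (a quick check using standard attributes):
>>> z.ndim
8
>>> z.dtype
'float64'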
See TensorType for more information about creating new types of tensors.
Converting from Python Objects#
One can convert Python objects by calling either aesara.tensor.as_tensor_variable() or aesara.shared().
aesara.tensor.as_tensor_variable#
Note
This is the default way of converting a Python object to a TensorVariable. Unless you have a need for shared variables, use this function instead.
- aesara.tensor.as_tensor_variable(x, name=None, ndim=None)[source]#
Turn an argument x into a TensorVariable or TensorConstant.
Many tensor Ops run their arguments through this function as pre-processing. It passes through TensorVariable instances, and tries to wrap other objects into TensorConstant.
When x is a Python number, the dtype is inferred as described above.
When x is a list or tuple it is passed through np.asarray.
If the ndim argument is not None, it must be an integer and the output will be broadcasted if necessary in order to have this many dimensions.
- Return type:
TensorVariable or TensorConstant
>>> import numpy as np
>>> x = np.array([[1, 2], [3, 4]])
>>> y = at.as_tensor(x)
>>> y.type.shape
(2, 2)
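Two more quick checks illustrate the pass-through and wrapping behaviour described above (a minimal sketch relying only on the documented semantics):
>>> v = at.vector('v')
>>> at.as_tensor_variable(v) is v   # TensorVariable inputs pass through unchanged
True
>>> at.as_tensor_variable([1, 2, 3]).type.shape   # lists go through np.asarray
(3,)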
Finally, when you use a NumPy ndarray or a Python number together with TensorVariable instances in arithmetic expressions, the result is a TensorVariable. What happens to the ndarray or the number? Aesara requires that the inputs to all expressions be Variable instances, so Aesara automatically wraps them in a TensorConstant.
>>> x = at.vector()
>>> b = at.add(x, np.ones(3))
>>> type(b)
<class 'aesara.tensor.var.TensorVariable'>
>>> b.type.shape
(3,)
>>> b.owner.inputs[1]
TensorConstant{(3,) of 1.0}
Note
Aesara makes a copy of any ndarray that is used in an expression, so subsequent changes to that ndarray will have no effect on the Aesara expression in which it appears.
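A short sketch of what this copy semantics means in practice (the compiled-function step assumes aesara.function and the default float64 floatX; the constant result follows from the copy described in the note):
>>> import aesara
>>> import numpy as np
>>> a = np.ones(3)
>>> x = at.vector('x')
>>> f = aesara.function([x], x + a)   # `a` is copied into a TensorConstant here
>>> a[:] = 100.0                      # mutating the original ndarray afterwards...
>>> f(np.zeros(3))                    # ...does not affect the compiled expression
array([1., 1., 1.])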
dtype and shape#
For NumPy ndarrays the dtype is given, but the static shape/broadcastable pattern must be inferred. The TensorConstant is given a type with a matching dtype, and a static shape/broadcastable pattern with 1/True for every shape dimension that is one and None/False for every dimension with an unknown shape.
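For example (a quick check, reusing the np import from above), a constant built from an array of shape (1, 3) gets a broadcastable first dimension:
>>> c = at.as_tensor_variable(np.ones((1, 3)))
>>> c.type.shape
(1, 3)
>>> c.broadcastable
(True, False)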
For Python numbers, the static shape/broadcastable pattern is () but the dtype must be inferred. Python integers are stored in the smallest dtype that can hold them, so small constants like 1 are stored in a bscalar. Likewise, Python floats are stored in an fscalar if fscalar suffices to hold them perfectly, and in a dscalar otherwise.
Note
When config.floatX == float32 (see config), Python floats are instead stored as single-precision floats. For fine control of this rounding policy, see aesara.tensor.basic.autocast_float.
Loading from file#
One can also create a tensor variable by loading a NumPy array from an .npy file.
- aesara.tensor.io.load(path, dtype, shape, mmap_mode=None)[source]#
Load an array from an .npy file.
- Parameters:
path – A Generic symbolic variable that will contain a string
dtype (data-type) – The data type of the array to be read.
shape – The static shape information of the loaded array.
mmap_mode – How the file will be loaded. None means that the data will be copied into an array in memory, ‘c’ means that the file will be mapped into virtual memory, so only the parts that are needed will be actually read from disk and put into memory. Other modes supported by numpy.load (‘r’, ‘r+’, ‘w+’) cannot be supported by Aesara.
Examples
>>> from aesara import *
>>> path = Variable(Generic(), None)
>>> x = tensor.load(path, 'int64', (None,))
>>> y = x * 2
>>> fn = function([path], y)
>>> fn("stored-array.npy")
array([0, 2, 4, 6, 8], dtype=int64)