aesara.tensor.std(input, axis=None, ddof=0, keepdims=False, corrected=False)

Computes the standard deviation along the given axis(es) of a tensor input.

  • axis (None or int or list of int (see Sum)) – Compute the standard deviation along this axis of the tensor. None means all axes (as in NumPy).

  • ddof (int) – Degrees of freedom; 0 computes the maximum-likelihood estimate, 1 computes the unbiased estimate.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.

  • corrected (bool) – If this is set to True, the ‘corrected_two_pass’ algorithm is used to compute the variance, which is numerically more stable.

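Since the reduction semantics mirror NumPy's, the effect of axis, ddof, and keepdims can be sketched with NumPy directly (a plain-NumPy analogue, not Aesara's symbolic API):

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# axis=None reduces over all axes, like NumPy's default.
print(np.std(x))

# ddof=1 gives the unbiased estimate: divide by N - 1 instead of N.
print(np.std(x, axis=0, ddof=1))

# keepdims=True leaves reduced axes as size-1 dimensions, so the
# result broadcasts correctly against the original tensor.
z = (x - np.mean(x, axis=1, keepdims=True)) / np.std(x, axis=1, keepdims=True)
print(z.shape)  # same shape as x: (2, 3)
```

With Aesara, the same calls would be made on symbolic tensors (e.g. `at.matrix("x")`) and compiled with `aesara.function` before evaluation.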

This function calls ‘var()’, which uses the two-pass algorithm (see reference below). ‘var()’ also supports the ‘corrected_two_pass’ algorithm (enabled with the ‘corrected’ flag), which is numerically more stable. Other implementations offer still better stability, but are likely slower.
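The difference between the two algorithms can be illustrated with a minimal NumPy sketch (an assumption-laden illustration of the standard two-pass and corrected two-pass formulas, not Aesara's actual implementation):

```python
import numpy as np

def std_two_pass(x, ddof=0, corrected=False):
    """Standard deviation via the two-pass algorithm, optionally corrected."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # First pass: compute the mean.
    mean = x.sum() / n
    # Second pass: sum of squared deviations from the mean.
    dev = x - mean
    ss = (dev ** 2).sum()
    if corrected:
        # The 'corrected_two_pass' variant subtracts a correction term
        # that cancels rounding error accumulated in the mean; in exact
        # arithmetic dev.sum() is zero and the term vanishes.
        ss -= dev.sum() ** 2 / n
    return np.sqrt(ss / (n - ddof))

data = [1.0, 2.0, 3.0, 4.0]
print(std_two_pass(data))          # ML estimate, matches np.std(data)
print(std_two_pass(data, ddof=1))  # unbiased estimate, matches np.std(data, ddof=1)
```

In float arithmetic the correction term only matters when the data's mean is large relative to its spread, which is exactly where the plain two-pass formula loses precision.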