Math

This submodule contains various mathematical functions. Most of them are imported directly from theano.tensor (see there for more details). Doing any kind of math with PyMC3 random variables, or defining custom likelihoods or priors, requires you to use these Theano expressions rather than NumPy or Python code.
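
For example, a minimal sketch (the model below is purely illustrative, not part of this reference) of transforming a random variable with pm.math instead of NumPy:

    import numpy as np
    import pymc3 as pm

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0.0, sd=1.0)
        # mu is a Theano tensor, so transform it with pm.math (or theano.tensor),
        # not with np.exp, which expects a NumPy array
        rate = pm.math.exp(mu)
        y = pm.Poisson('y', mu=rate, observed=np.array([1, 2, 3]))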

dot(a, b) Computes the dot product of two variables.
constant(x[, name, ndim, dtype]) Return a symbolic Constant with value x.
flatten(x[, ndim, outdim]) Reshapes the variable x by keeping the first outdim-1 dimension size(s) of x the same, and making the last dimension size of x equal to the product of its remaining dimension size(s).
zeros_like(model[, dtype, opt]) Equivalent of numpy.zeros_like: a tensor of zeros with the same shape as model (and the same dtype unless overridden). The opt flag is meant for Theano optimizations, not for user-built graphs, since it can mean model is not always kept in the graph.
ones_like(model[, dtype, opt]) Equivalent of numpy.ones_like: a tensor of ones with the same shape as model (and the same dtype unless overridden). The opt flag is meant for Theano optimizations, not for user-built graphs, since it can mean model is not always kept in the graph.
stack(*tensors, **kwargs) Stack tensors in sequence on given axis (default is 0).
concatenate(tensor_list[, axis]) Alias for `join`(axis, *tensor_list).
sum(input[, axis, dtype, keepdims, acc_dtype]) Computes the sum along the given axis(es) of a tensor input.
prod(input[, axis, dtype, keepdims, …]) Computes the product along the given axis(es) of a tensor input.
lt a < b
gt a > b
le a <= b
ge a >= b
eq a == b
neq a != b
switch if cond then ift else iff
clip Clip x to be between min and max.
where if cond then ift else iff
and_ bitwise a & b
or_ bitwise a | b
abs_ |`a`|
exp e^`a`
log base e logarithm of a
cos cosine of a
sin sine of a
tan tangent of a
cosh hyperbolic cosine of a
sinh hyperbolic sine of a
tanh hyperbolic tangent of a
sqr square of a
sqrt square root of a
erf error function
erfinv inverse error function
maximum elemwise maximum.
minimum elemwise minimum.
sgn sign of a
ceil ceiling of a
floor floor of a
det Matrix determinant.
matrix_inverse Computes the inverse of a matrix \(A\).
extract_diag Return specified diagonals.
matrix_dot(*args) Shorthand for chaining several dot products, e.g. matrix_dot(a, b, c) == dot(dot(a, b), c).
trace(X) Returns the sum of diagonal elements of matrix X.
sigmoid logistic sigmoid of a, 1 / (1 + exp(-a))
logsumexp(x[, axis]) Compute log(sum(exp(x))) along the given axis in a numerically stable way.
invlogit(x[, eps]) The inverse of the logit function, 1 / (1 + exp(-x)).
logit(p) The logit function, log(p / (1 - p)).
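
The elementwise operations above combine freely inside a model; a short sketch (variable names are purely illustrative):

    import pymc3 as pm

    with pm.Model() as model:
        x = pm.Normal('x', mu=0.0, sd=1.0)
        # comparisons, switch and clip all return Theano expressions
        positive_part = pm.Deterministic('positive_part',
                                         pm.math.switch(pm.math.gt(x, 0), x, 0.0))
        bounded = pm.Deterministic('bounded', pm.math.clip(x, -1.0, 1.0))
        magnitude = pm.Deterministic('magnitude', pm.math.abs_(x))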
class pymc3.math.BatchedDiag

Fast BatchedDiag allocation

make_node(diag)

Create an “apply” node for the inputs, in that order.

perform(node, ins, outs, params=None)

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters:
  • node (Apply instance) – Contains the symbolic inputs and outputs.
  • inputs (list) – Sequence of inputs (immutable).
  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could be allocated by another Op, and impl is free to reuse it as it sees fit or to discard it and allocate new memory.

Raises: MethodNotDefined – The subclass does not override this method.
class pymc3.math.BlockDiagonalMatrix(sparse=False, format='csr')
make_node(*matrices)

Create an “apply” node for the inputs, in that order.

perform(node, inputs, output_storage, params=None)

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters:
  • node (Apply instance) – Contains the symbolic inputs and outputs.
  • inputs (list) – Sequence of inputs (immutable).
  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could be allocated by another Op, and impl is free to reuse it as it sees fit or to discard it and allocate new memory.

Raises: MethodNotDefined – The subclass does not override this method.
class pymc3.math.LogDet

Compute the logarithm of the absolute determinant of a square matrix M, log(abs(det(M))), on the CPU. Avoids det(M) overflow/underflow.

Notes

Once PR #3959 (https://github.com/Theano/Theano/pull/3959/) by harpone is merged, this must be removed.

make_node(x)

Create an “apply” node for the inputs, in that order.

perform(node, inputs, outputs, params=None)

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters:
  • node (Apply instance) – Contains the symbolic inputs and outputs.
  • inputs (list) – Sequence of inputs (immutable).
  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could be allocated by another Op, and impl is free to reuse it as it sees fit or to discard it and allocate new memory.

Raises: MethodNotDefined – The subclass does not override this method.
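
A small usage sketch for LogDet; it assumes the module exposes a ready-made instance of this Op (commonly available as pymc3.math.logdet), which should be verified against the installed version:

    import numpy as np
    import theano.tensor as tt
    from pymc3.math import logdet  # assumed: a module-level instance of LogDet

    A_val = np.array([[2.0, 0.0], [0.0, 3.0]])
    A = tt.as_tensor_variable(A_val)
    print(logdet(A).eval())                      # log|det(A)| = log(6)
    print(np.log(np.abs(np.linalg.det(A_val))))  # reference value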
pymc3.math.block_diagonal(matrices, sparse=False, format='csr')

See scipy.sparse.block_diag or scipy.linalg.block_diag for reference; a usage sketch follows the parameter list below.

Parameters:
  • matrices (tensors) –
  • format (str (default 'csr')) – must be one of: ‘csr’, ‘csc’
  • sparse (bool (default False)) – if True return sparse format
Returns:

matrix
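
A dense-output sketch, checked against scipy.linalg.block_diag (the input matrices are illustrative):

    import numpy as np
    import scipy.linalg
    import theano.tensor as tt
    import pymc3 as pm

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0]])
    dense = pm.math.block_diagonal([tt.as_tensor_variable(A),
                                    tt.as_tensor_variable(B)], sparse=False)
    print(dense.eval())                    # 3x3 block-diagonal matrix
    print(scipy.linalg.block_diag(A, B))   # NumPy/SciPy reference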

pymc3.math.cartesian(*arrays)

Makes the Cartesian product of arrays.

Parameters: arrays (1D array-like) – 1D arrays where earlier arrays loop more slowly than later ones
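
For example (rows are ordered so that earlier arrays loop more slowly than later ones):

    import numpy as np
    import pymc3 as pm

    xs = np.array([1, 2, 3])
    ys = np.array([10, 20])
    grid = pm.math.cartesian(xs, ys)
    # expected rows: [1 10], [1 20], [2 10], [2 20], [3 10], [3 20]
    print(grid)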
pymc3.math.expand_packed_triangular(n, packed, lower=True, diagonal_only=False)

Convert a packed triangular matrix into a two-dimensional array; a usage sketch follows the parameter list below.

Triangular matrices can be stored with better space efficiency by storing the non-zero values in a one-dimensional array. We number the elements by row like this (for lower or upper triangular matrices):

[[0 - - -]      [[0 1 2 3]
 [1 2 - -]       [- 4 5 6]
 [3 4 5 -]       [- - 7 8]
 [6 7 8 9]]      [- - - 9]]
Parameters:
  • n (int) – The number of rows of the triangular matrix.
  • packed (theano.vector) – The matrix in packed format.
  • lower (bool, default=True) – If true, assume that the matrix is lower triangular.
  • diagonal_only (bool) – If true, return only the diagonal of the matrix.
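
A small sketch that unpacks the row-ordered storage shown above into a dense lower-triangular matrix (the concrete values are illustrative):

    import numpy as np
    import theano.tensor as tt
    import pymc3 as pm

    # packed storage of a 3x3 lower-triangular matrix, numbered by row
    packed = tt.as_tensor_variable(np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))
    L = pm.math.expand_packed_triangular(3, packed, lower=True)
    print(L.eval())
    # expected:
    # [[0. 0. 0.]
    #  [1. 2. 0.]
    #  [3. 4. 5.]]

A common use is expanding the packed Cholesky factor returned by pm.LKJCholeskyCov into a full lower-triangular matrix.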
pymc3.math.invlogit(x, eps=2.220446049250313e-16)

The inverse of the logit function, 1 / (1 + exp(-x)).
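
A quick numeric check (printed values are approximate):

    import pymc3 as pm

    print(pm.math.invlogit(0.0).eval())   # 0.5
    print(pm.math.logit(0.25).eval())     # log(0.25 / 0.75) ≈ -1.0986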

pymc3.math.kron_diag(*diags)

Returns diagonal of a kronecker product.

Parameters: diags (1D arrays) – The diagonals of matrices that are to be Kroneckered
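
A sketch checking kron_diag against a dense reference (the .eval() call assumes the result is a Theano expression):

    import numpy as np
    import pymc3 as pm

    d1 = np.array([1.0, 2.0])
    d2 = np.array([3.0, 4.0])
    print(pm.math.kron_diag(d1, d2).eval())            # [3. 4. 6. 8.]
    print(np.diag(np.kron(np.diag(d1), np.diag(d2))))  # dense reference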
pymc3.math.kron_matrix_op(krons, m, op)

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m)

Parameters:
  • krons (list of square 2D array-like objects) –
    D square matrices [A_1, A_2, …, A_D] to be Kronecker’ed:
    \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\)

    Product of column dimensions must be N

  • m (NxM array or 1D array (treated as Nx1)) – Object that krons act upon
pymc3.math.kronecker(*Ks)
Return the Kronecker product of arguments:
\(K_1 \otimes K_2 \otimes ... \otimes K_D\)
Parameters: Ks (2D array-like) –
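
A quick check against numpy.kron (the matrices are illustrative):

    import numpy as np
    import pymc3 as pm

    K1 = np.array([[1.0, 2.0], [3.0, 4.0]])
    K2 = np.eye(2)
    K = pm.math.kronecker(K1, K2)  # symbolic Kronecker product
    np.testing.assert_allclose(K.eval(), np.kron(K1, K2))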
pymc3.math.log1mexp(x)

Return log(1 - exp(-x)).

This function is numerically more stable than the naive approach.

For details, see https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf
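
For very small x the naive NumPy formula underflows while log1mexp stays finite; a small sketch:

    import numpy as np
    import pymc3 as pm

    x = 1e-20
    print(np.log(1 - np.exp(-x)))      # -inf: 1 - exp(-x) underflows to 0
    print(pm.math.log1mexp(x).eval())  # ≈ -46.05, i.e. log(1e-20)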

pymc3.math.log1pexp(x)

Return log(1 + exp(x)), also called softplus.

This function is numerically more stable than the naive approach.
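
Similarly, for large x the naive formula overflows while log1pexp does not; a small sketch:

    import numpy as np
    import pymc3 as pm

    x = 1000.0
    print(np.log(1 + np.exp(x)))       # inf: exp(1000) overflows
    print(pm.math.log1pexp(x).eval())  # ≈ 1000.0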

pymc3.math.logdiffexp(a, b)

Return log(exp(a) - exp(b)).

pymc3.math.tround(*args, **kwargs)

Temporary function to silence round warning in Theano. Please remove when the warning disappears.