# Math

This submodule contains various mathematical functions. Most of them are imported directly from theano.tensor (see there for more details). Doing any kind of math with PyMC3 random variables, or defining custom likelihoods or priors, requires you to use these Theano expressions rather than NumPy or Python code.

| Function | Description |
| --- | --- |
| `dot(l, r)` | Return a symbolic dot product. |
| `constant(x[, name, ndim, dtype])` | Return a `TensorConstant` with value `x`. |
| `flatten(x[, ndim])` | Return a copy of the array collapsed into one dimension. |
| `zeros_like(model[, dtype, opt])` | Equivalent of `numpy.zeros_like`. With `opt=True`, a constant may be returned instead of a graph when possible (useful for Theano optimization; `model` is then not always part of the graph). |
| `ones_like(model[, dtype, opt])` | Equivalent of `numpy.ones_like`, with the same `opt` behaviour as `zeros_like`. |
| `stack(*tensors, **kwargs)` | Stack tensors in sequence on the given axis (default 0). |
| `concatenate(tensor_list[, axis])` | Alias for `join(axis, *tensor_list)`. |
| `sum(input[, axis, dtype, keepdims, acc_dtype])` | Compute the sum along the given axis(es) of a tensor `input`. |
| `prod(input[, axis, dtype, keepdims, ...])` | Compute the product along the given axis(es) of a tensor `input`. |
| `lt` | `a < b` |
| `gt` | `a > b` |
| `le` | `a <= b` |
| `ge` | `a >= b` |
| `eq` | `a == b` |
| `neq` | `a != b` |
| `switch` | `ift` if `cond` else `iff` |
| `clip` | Clip `x` to be between `min` and `max`. |
| `where` | `ift` if `cond` else `iff` |
| `and_` | Bitwise and of `a` and `b` |
| `or_` | Bitwise or of `a` and `b` |
| `abs_` | Absolute value of `a` |
| `exp` | `e**a` |
| `log` | Natural (base-e) logarithm of `a` |
| `cos` | Cosine of `a` |
| `sin` | Sine of `a` |
| `tan` | Tangent of `a` |
| `cosh` | Hyperbolic cosine of `a` |
| `sinh` | Hyperbolic sine of `a` |
| `tanh` | Hyperbolic tangent of `a` |
| `sqr` | Square of `a` |
| `sqrt` | Square root of `a` |
| `erf` | Error function |
| `erfinv` | Inverse error function |
| `maximum` | Element-wise maximum |
| `minimum` | Element-wise minimum |
| `sgn` | Sign of `a` |
| `ceil` | Ceiling of `a` |
| `floor` | Floor of `a` |
| `det` | Matrix determinant |
| `matrix_inverse` | Compute the inverse of a matrix $$A$$ |
| `extract_diag` | Return specified diagonals |
| `matrix_dot(*args)` | Shorthand for a product of several dots |
| `trace(X)` | Return the sum of the diagonal elements of matrix `X` |
| `sigmoid` | Element-wise logistic sigmoid |
| `logsumexp(x[, axis, keepdims])` | Numerically stable `log(sum(exp(x)))` along the given axis(es) |
| `invlogit(x[, eps])` | The inverse of the logit function, `1 / (1 + exp(-x))` |
| `logit(p)` | Log-odds, `log(p / (1 - p))` |
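Many of these helpers have one-line semantics. As a plain-Python sketch of `switch` and `clip` (the `pymc3.math` versions operate element-wise on Theano tensors; this only illustrates the scalar behaviour):

```python
def switch(cond, ift, iff):
    # if cond then ift else iff
    return ift if cond else iff

def clip(x, lo, hi):
    # Clamp x into the closed interval [lo, hi].
    return max(lo, min(hi, x))

# switch picks a branch; clip saturates at the bounds.
picked = switch(True, "ift", "iff")
saturated = clip(5, 0, 3)
```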
class pymc3.math.BatchedDiag

Fast BatchedDiag allocation

grad(inputs, output_grads)

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters
inputs : list of Variable

The input variables.

output_grads : list of Variable

The gradients of the output variables.

Returns

The gradients with respect to each Variable in inputs.

make_node(diag)

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns
node: Apply

The constructed Apply node.

perform(node, ins, outs, params=None)

Calculate the function on the inputs and put the variables in the output storage.

Parameters
node : Apply

The symbolic Apply node that represents this computation.

inputs : Sequence

Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

output_storage : list of list

List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to a Variable in node.outputs; the primary purpose of this method is to set the values of these sub-lists.

params : tuple

A tuple containing the values of each entry in __props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
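The output_storage protocol can be illustrated with a minimal pure-Python stand-in (a hypothetical `ScaleOp`, not a real Theano Op; it only mimics the calling convention described above):

```python
class ScaleOp:
    """Hypothetical stand-in illustrating the perform/output_storage protocol."""

    __props__ = ("factor",)

    def __init__(self, factor):
        self.factor = factor

    def perform(self, node, inputs, output_storage, params=None):
        # inputs: immutable sequence of numeric values (one per input Variable).
        # output_storage: list of mutable single-element lists; we set the
        # single slot of each sub-list and never change its length.
        (x,) = inputs
        output_storage[0][0] = self.factor * x

out = [[None]]  # one output slot, possibly pre-set by a previous call
ScaleOp(3.0).perform(None, (2.0,), out)
```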

class pymc3.math.BlockDiagonalMatrix(sparse=False, format='csr')

grad(inputs, output_grads)

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters
inputs : list of Variable

The input variables.

output_grads : list of Variable

The gradients of the output variables.

Returns

The gradients with respect to each Variable in inputs.

make_node(*matrices)

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns
node: Apply

The constructed Apply node.

perform(node, inputs, output_storage, params=None)

Calculate the function on the inputs and put the variables in the output storage.

Parameters
node : Apply

The symbolic Apply node that represents this computation.

inputs : Sequence

Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

output_storage : list of list

List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to a Variable in node.outputs; the primary purpose of this method is to set the values of these sub-lists.

params : tuple

A tuple containing the values of each entry in __props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class pymc3.math.LogDet

Compute the logarithm of the absolute determinant of a square matrix M, log(abs(det(M))), on the CPU. Avoids overflow/underflow of det(M).

Notes

Once PR #3959 (https://github.com/Theano/Theano/pull/3959/) by harpone is merged, this must be removed.
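The overflow issue is easy to demonstrate in pure Python for the special case of a diagonal matrix, where det(M) is the product of the diagonal entries (`logdet_diagonal` is a hypothetical helper for illustration, not part of `pymc3.math`):

```python
import math

def logdet_diagonal(diag):
    # log(abs(det(D))) for a diagonal matrix D: summing logs never
    # forms the (potentially overflowing) determinant itself.
    return sum(math.log(abs(d)) for d in diag)

diag = [1e200, -1e200, 1e200]
naive = diag[0] * diag[1] * diag[2]   # overflows to -inf
stable = logdet_diagonal(diag)        # about 600 * log(10)
```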

grad(inputs, output_grads)

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters
inputs : list of Variable

The input variables.

output_grads : list of Variable

The gradients of the output variables.

Returns

The gradients with respect to each Variable in inputs.

make_node(x)

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns
node: Apply

The constructed Apply node.

perform(node, inputs, outputs, params=None)

Calculate the function on the inputs and put the variables in the output storage.

Parameters
node : Apply

The symbolic Apply node that represents this computation.

inputs : Sequence

Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

output_storage : list of list

List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to a Variable in node.outputs; the primary purpose of this method is to set the values of these sub-lists.

params : tuple

A tuple containing the values of each entry in __props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

pymc3.math.block_diagonal(matrices, sparse=False, format='csr')

See scipy.sparse.block_diag or scipy.linalg.block_diag for reference.

Parameters

matrices : tensors

sparse : bool (default False)

If True, return a sparse format.

format : str (default 'csr')

Must be one of 'csr', 'csc'.

Returns

matrix
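A dense pure-Python sketch of the block-diagonal layout (the `pymc3.math` version works on Theano tensors and can also produce a sparse result):

```python
def block_diag_dense(matrices):
    # Place each matrix on the diagonal of a zero-filled result.
    rows = sum(len(m) for m in matrices)
    cols = sum(len(m[0]) for m in matrices)
    out = [[0] * cols for _ in range(rows)]
    r = c = 0
    for m in matrices:
        for i, row in enumerate(m):
            for j, v in enumerate(row):
                out[r + i][c + j] = v
        r += len(m)
        c += len(m[0])
    return out

result = block_diag_dense([[[1, 2], [3, 4]], [[5]]])
# result: [[1, 2, 0], [3, 4, 0], [0, 0, 5]]
```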
pymc3.math.cartesian(*arrays)

Makes the Cartesian product of arrays.

Parameters
arrays : N-D array-like

N-D arrays where earlier arrays loop more slowly than later ones.
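The "earlier arrays loop more slowly" ordering is the same as `itertools.product`; a pure-Python sketch (the real function returns a NumPy array of rows):

```python
from itertools import product

def cartesian(*arrays):
    # Rows enumerate all combinations; the first array varies slowest.
    return [list(combo) for combo in product(*arrays)]

pairs = cartesian([1, 2], [10, 20])
# pairs: [[1, 10], [1, 20], [2, 10], [2, 20]]
```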

pymc3.math.expand_packed_triangular(n, packed, lower=True, diagonal_only=False)

Convert a packed triangular matrix into a two dimensional array.

Triangular matrices can be stored with better space efficiency by storing the non-zero values in a one-dimensional array. We number the elements by row, like this (for lower and upper triangular matrices, respectively):

```
[[0 - - -]    [[0 1 2 3]
 [1 2 - -]     [- 4 5 6]
 [3 4 5 -]     [- - 7 8]
 [6 7 8 9]]    [- - - 9]]
```

Parameters
n: int

The number of rows of the triangular matrix.

packed: theano.vector

The matrix in packed format.

lower: bool, default=True

If True, assume that the matrix is lower triangular.

diagonal_only: bool

If True, return only the diagonal of the matrix.
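Following the row-wise numbering above, the lower-triangular case can be sketched in pure Python (the real function builds a symbolic Theano expression instead):

```python
def expand_packed_lower(n, packed):
    # Row i receives its i + 1 packed entries; the rest stays zero.
    out = [[0] * n for _ in range(n)]
    k = 0
    for i in range(n):
        for j in range(i + 1):
            out[i][j] = packed[k]
            k += 1
    return out

L = expand_packed_lower(3, [1, 2, 3, 4, 5, 6])
# L: [[1, 0, 0], [2, 3, 0], [4, 5, 6]]
```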

pymc3.math.invlogit(x, eps=2.220446049250313e-16)

The inverse of the logit function, 1 / (1 + exp(-x)).
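A scalar sketch; the role assumed for `eps` here is to keep the result strictly inside (0, 1) so that downstream logs and logits stay finite (the exact clamping used by `pymc3.math.invlogit` is an assumption of this sketch; check the source):

```python
import math

def invlogit(x, eps=2.220446049250313e-16):
    p = 1.0 / (1.0 + math.exp(-x))
    # Shrink toward (eps, 1 - eps); assumed clamping behaviour.
    return (1.0 - 2.0 * eps) * p + eps
```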

pymc3.math.kron_diag(*diags)

Return the diagonal of a Kronecker product.

Parameters
diags: 1D arrays

The diagonals of the matrices that are to be Kroneckered.

pymc3.math.kron_dot(krons, m, *, op=<function dot>)

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m)

Parameters
krons : list of square 2D array-like objects

D square matrices $$[A_1, A_2, ..., A_D]$$ to be Kroneckered into $$A = A_1 \otimes A_2 \otimes ... \otimes A_D$$. The product of their column dimensions must be $$N$$.

m : NxM array or 1D array (treated as Nx1)

Object that the krons act upon.

Returns
numpy array
pymc3.math.kron_matrix_op(krons, m, op)

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m)

Parameters
krons : list of square 2D array-like objects

D square matrices $$[A_1, A_2, ..., A_D]$$ to be Kroneckered into $$A = A_1 \otimes A_2 \otimes ... \otimes A_D$$. The product of their column dimensions must be $$N$$.

m : NxM array or 1D array (treated as Nx1)

Object that the krons act upon.

Returns
numpy array
pymc3.math.kron_solve_lower(krons, m, *, op=Solve{('lower_triangular', True, False, False)})

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m)

Parameters
krons : list of square 2D array-like objects

D square matrices $$[A_1, A_2, ..., A_D]$$ to be Kroneckered into $$A = A_1 \otimes A_2 \otimes ... \otimes A_D$$. The product of their column dimensions must be $$N$$.

m : NxM array or 1D array (treated as Nx1)

Object that the krons act upon.

Returns
numpy array
pymc3.math.kron_solve_upper(krons, m, *, op=Solve{('upper_triangular', False, False, False)})

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m)

Parameters
krons : list of square 2D array-like objects

D square matrices $$[A_1, A_2, ..., A_D]$$ to be Kroneckered into $$A = A_1 \otimes A_2 \otimes ... \otimes A_D$$. The product of their column dimensions must be $$N$$.

m : NxM array or 1D array (treated as Nx1)

Object that the krons act upon.

Returns
numpy array
pymc3.math.kronecker(*Ks)
Return the Kronecker product of arguments:

$$K_1 \otimes K_2 \otimes ... \otimes K_D$$

Parameters
Ks : Iterable of 2D array-like

Arrays of which to take the product.

Returns

np.ndarray

Block-matrix Kronecker product of the argument matrices.
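A pure-Python sketch of the block-matrix construction for dense nested lists (the real function returns a Theano expression):

```python
from functools import reduce

def kron2(A, B):
    # kron(A, B)[i*p + k][j*q + l] = A[i][j] * B[k][l]
    p, q = len(B), len(B[0])
    return [
        [A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
        for i in range(len(A)) for k in range(p)
    ]

def kronecker(*Ks):
    # Fold the pairwise product left to right: K1 (x) K2 (x) ... (x) KD.
    return reduce(kron2, Ks)

K = kronecker([[1, 2], [3, 4]], [[0, 1], [1, 0]])
# K: [[0, 1, 0, 2], [1, 0, 2, 0], [0, 3, 0, 4], [3, 0, 4, 0]]
```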

pymc3.math.log1mexp(x)

Return log(1 - exp(-x)).

This function is numerically more stable than the naive approach.
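The standard log1mexp recipe switches between two equivalent expressions at x = log 2; a scalar pure-Python sketch:

```python
import math

def log1mexp(x):
    # log(1 - exp(-x)) for x > 0.
    if x < math.log(2.0):
        # exp(-x) is close to 1: expm1 keeps the small difference exact.
        return math.log(-math.expm1(-x))
    # exp(-x) is small: log1p handles the tiny subtraction exactly.
    return math.log1p(-math.exp(-x))
```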

pymc3.math.log1mexp_numpy(x)

Return log(1 - exp(-x)). This function is numerically more stable than the naive approach. For details, see https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf

pymc3.math.log1pexp(x)

Return log(1 + exp(x)), also called softplus.

This function is numerically more stable than the naive approach.
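A scalar sketch of the stable softplus: for large x, log(1 + exp(x)) is approximately x, so the large-x branch avoids overflowing exp:

```python
import math

def log1pexp(x):
    # Softplus: log(1 + exp(x)) without overflowing exp for large x.
    if x > 0:
        # log(1 + exp(x)) = x + log(1 + exp(-x)); exp(-x) cannot overflow.
        return x + math.log1p(math.exp(-x))
    return math.log1p(math.exp(x))
```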

pymc3.math.logdiffexp(a, b)

Return log(exp(a) - exp(b)).

pymc3.math.logdiffexp_numpy(a, b)

Return log(exp(a) - exp(b)), NumPy implementation.
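log(exp(a) - exp(b)) can be computed stably for a > b by factoring out exp(a), so that exp is never applied to a large argument; a scalar sketch:

```python
import math

def logdiffexp(a, b):
    # log(exp(a) - exp(b)) = a + log(1 - exp(-(a - b))), valid for a > b.
    d = a - b
    if d < math.log(2.0):
        return a + math.log(-math.expm1(-d))
    return a + math.log1p(-math.exp(-d))
```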
pymc3.math.tround(*args, **kwargs)

Temporary function to silence round warning in Theano. Please remove when the warning disappears.