GP Implementations

class pymc3.gp.gp.Latent(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>)

Latent Gaussian process.

The gp.Latent class is a direct implementation of a GP. No additive noise is assumed. It is called “Latent” because the underlying function values are treated as latent variables. It has a prior method and a conditional method. Given a mean and covariance function, the function \(f(x)\) is modeled as,

\[f(x) \sim \mathcal{GP}\left(\mu(x), k(x, x')\right)\]

Use the prior and conditional methods to actually construct random variables representing the unknown, or latent, function whose distribution is the GP prior or GP conditional. This GP implementation can be used to implement regression on data that is not normally distributed. For more information on the prior and conditional methods, see their docstrings.

Parameters:
  • cov_func (None, 2D array, or instance of Covariance) – The covariance function. Defaults to zero.
  • mean_func (None, instance of Mean) – The mean function. Defaults to zero.

Examples

import numpy as np
import pymc3 as pm

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]

with pm.Model() as model:
    # Specify the covariance function.
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.Latent(cov_func=cov_func)

    # Place a GP prior over the function f.
    f = gp.prior("f", X=X)

...

# After fitting or sampling, specify the distribution
# at new points with .conditional
Xnew = np.linspace(-1, 2, 50)[:, None]

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
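
Because gp.Latent assumes no additive noise, the latent function f can feed any likelihood. A minimal sketch continuing the example above, with hypothetical binary responses y:

# Hypothetical binary responses, one per row of X.
y = np.random.binomial(1, 0.5, size=10)

with model:
    # Squash f through the logistic function and model y as Bernoulli.
    p = pm.math.invlogit(f)
    y_ = pm.Bernoulli("y", p=p, observed=y)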
conditional(name, Xnew, given=None, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew.

Given a set of function values f over which the GP prior was placed, the conditional distribution over a set of new points, f_*, is

\[f_* \mid f, X, X_* \sim \mathcal{GP}\left( K(X_*, X) K(X, X)^{-1} f \,, K(X_*, X_*) - K(X_*, X) K(X, X)^{-1} K(X, X_*) \right)\]
Parameters:
  • name (string) – Name of the random variable
  • Xnew (array-like) – Function input values.
  • given (dict) – Can optionally take as key value pairs: X, y, noise, and gp. See the section in the documentation on additive GP models in PyMC3 for more information.
  • **kwargs – Extra keyword arguments that are passed to MvNormal distribution constructor.
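
As a usage sketch, predictive draws of f at Xnew can be generated from the conditional defined in the example above, assuming a trace obtained from pm.sample and a PyMC3 version whose sample_posterior_predictive accepts var_names:

with model:
    # One draw of f at Xnew per posterior sample in `trace`.
    ppc = pm.sample_posterior_predictive(trace, var_names=["fcond"])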
prior(name, X, reparameterize=True, **kwargs)

Returns the GP prior distribution evaluated over the input locations X.

This is the prior probability over the space of functions described by its mean and covariance function.

\[f \mid X \sim \text{MvNormal}\left( \mu(X), k(X, X') \right)\]
Parameters:
  • name (string) – Name of the random variable
  • X (array-like) – Function input values.
  • reparameterize (bool) – Reparameterize the distribution by rotating the random variable by the Cholesky factor of the covariance matrix.
  • **kwargs – Extra keyword arguments that are passed to distribution constructor.
class pymc3.gp.gp.Marginal(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>)

Marginal Gaussian process.

The gp.Marginal class is an implementation of the sum of a GP prior and additive noise. It has marginal_likelihood, conditional, and predict methods. This GP implementation can be used to implement regression on data that is normally distributed. For more information on the marginal_likelihood and conditional methods, see their docstrings.

Parameters:
  • cov_func (None, 2D array, or instance of Covariance) – The covariance function. Defaults to zero.
  • mean_func (None, instance of Mean) – The mean function. Defaults to zero.

Examples

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]
y = np.random.randn(10)  # toy data

with pm.Model() as model:
    # Specify the covariance function.
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.Marginal(cov_func=cov_func)

    # Place a GP prior over the function f.
    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", X=X, y=y, noise=sigma)

...

# After fitting or sampling, specify the distribution
# at new points with .conditional
Xnew = np.linspace(-1, 2, 50)[:, None]

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
conditional(name, Xnew, pred_noise=False, given=None, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew.

Given a set of function values f over which the GP prior was placed, the conditional distribution over a set of new points, f_*, is:

\[f_* \mid f, X, X_* \sim \mathcal{GP}\left( K(X_*, X) [K(X, X) + K_{n}(X, X)]^{-1} f \,, K(X_*, X_*) - K(X_*, X) [K(X, X) + K_{n}(X, X)]^{-1} K(X, X_*) \right)\]
Parameters:
  • name (string) – Name of the random variable
  • Xnew (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • pred_noise (bool) – Whether or not observation noise is included in the conditional. Default is False.
  • given (dict) – Can optionally take as key value pairs: X, y, noise, and gp. See the section in the documentation on additive GP models in PyMC3 for more information.
  • **kwargs – Extra keyword arguments that are passed to MvNormal distribution constructor.
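
The given keyword is needed mainly for additive GP models, where Marginal objects are summed and the conditional of a single component is requested. A minimal sketch, assuming inputs X, data y, and prediction points Xnew as in the example above:

with pm.Model() as model:
    gp_trend = pm.gp.Marginal(cov_func=pm.gp.cov.ExpQuad(1, ls=1.0))
    gp_short = pm.gp.Marginal(cov_func=pm.gp.cov.Matern52(1, ls=0.1))
    gp = gp_trend + gp_short  # additive GP

    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", X=X, y=y, noise=sigma)

    # Conditional of the trend component alone, given the total model.
    f_trend = gp_trend.conditional("f_trend", Xnew=Xnew,
                                   given={"X": X, "y": y, "noise": sigma, "gp": gp})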
marginal_likelihood(name, X, y, noise, is_observed=True, **kwargs)

Returns the marginal likelihood distribution, given the input locations X and the data y.

This is the integral over the product of the GP prior and a normal likelihood.

\[y \mid X,\theta \sim \int p(y \mid f,\, X,\, \theta) \, p(f \mid X,\, \theta) \, df\]
Parameters:
  • name (string) – Name of the random variable
  • X (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • y (array-like) – Data that is the sum of the function with the GP prior and Gaussian noise. Must have shape (n, ).
  • noise (scalar, Variable, or Covariance) – Standard deviation of the Gaussian noise. Can also be a Covariance for non-white noise.
  • is_observed (bool) – Whether to set y as an observed variable in the model. Default is True.
  • **kwargs – Extra keyword arguments that are passed to MvNormal distribution constructor.
predict(Xnew, point=None, diag=False, pred_noise=False, given=None)

Return the mean vector and covariance matrix of the conditional distribution as numpy arrays, given a point, such as the MAP estimate or a sample from a trace.

Parameters:
  • Xnew (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • point (pymc3.model.Point) – A specific point to condition on.
  • diag (bool) – If True, return the diagonal instead of the full covariance matrix. Default is False.
  • pred_noise (bool) – Whether or not observation noise is included in the conditional. Default is False.
  • given (dict) – Same as conditional method.
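
As a usage sketch, assuming the Marginal model above has been fit with pm.find_MAP:

with model:
    mp = pm.find_MAP()

# Predictive mean and full covariance at Xnew, conditioned on the MAP point.
mu, cov = gp.predict(Xnew, point=mp)

# Predictive mean and variances only.
mu, var = gp.predict(Xnew, point=mp, diag=True)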
predictt(Xnew, diag=False, pred_noise=False, given=None)

Return the mean vector and covariance matrix of the conditional distribution as symbolic variables.

Parameters:
  • Xnew (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • diag (bool) – If True, return the diagonal instead of the full covariance matrix. Default is False.
  • pred_noise (bool) – Whether or not observation noise is included in the conditional. Default is False.
  • given (dict) – Same as conditional method.
class pymc3.gp.gp.TP(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>, nu=None)

Student’s T process prior.

The usage is nearly identical to that of gp.Latent. The differences are that it must be initialized with a degrees-of-freedom parameter, and that TPs, unlike GPs, are not additive. Given a mean and covariance function, and a degrees-of-freedom parameter, the function \(f(x)\) is modeled as,

\[f(X) \sim \mathcal{TP}\left( \mu(X), k(X, X'), \nu \right)\]
Parameters:
  • cov_func (None, 2D array, or instance of Covariance) – The covariance function. Defaults to zero.
  • mean_func (None, instance of Mean) – The mean function. Defaults to zero.
  • nu (float) – The degrees of freedom

References

  • Shah, A., Wilson, A. G., and Ghahramani, Z. (2014). Student-t Processes as Alternatives to Gaussian Processes. arXiv preprint arXiv:1402.4306.
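
A minimal usage sketch, mirroring the gp.Latent example above with the extra nu argument:

X = np.linspace(0, 1, 10)[:, None]

with pm.Model() as model:
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
    tp = pm.gp.TP(cov_func=cov_func, nu=3)
    f = tp.prior("f", X=X)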
conditional(name, Xnew, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew.

Given a set of function values f over which the TP prior was placed, the conditional distribution over a set of new points, f_*, is a multivariate Student’s t.

Parameters:
  • name (string) – Name of the random variable
  • Xnew (array-like) – Function input values.
  • **kwargs – Extra keyword arguments that are passed to the MvStudentT distribution constructor.
prior(name, X, reparameterize=True, **kwargs)

Returns the TP prior distribution evaluated over the input locations X.

This is the prior probability over the space of functions described by its mean and covariance function.

Parameters:
  • name (string) – Name of the random variable
  • X (array-like) – Function input values.
  • reparameterize (bool) – Reparameterize the distribution by rotating the random variable by the Cholesky factor of the covariance matrix.
  • **kwargs – Extra keyword arguments that are passed to distribution constructor.
class pymc3.gp.gp.MarginalSparse(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>, approx='FITC')

Approximate marginal Gaussian process.

The gp.MarginalSparse class is an approximate implementation of the sum of a GP prior and additive noise. It has marginal_likelihood, conditional, and predict methods. This GP implementation can be used to implement regression on data that is normally distributed. The available approximations are:

  • DTC: Deterministic Training Conditional
  • FITC: Fully Independent Training Conditional
  • VFE: Variational Free Energy
Parameters:
  • cov_func (None, 2D array, or instance of Covariance) – The covariance function. Defaults to zero.
  • mean_func (None, instance of Mean) – The mean function. Defaults to zero.
  • approx (string) – The approximation to use. Must be one of VFE, FITC or DTC.

Examples

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]
y = np.random.randn(10)  # toy data

# A smaller set of inducing inputs
Xu = np.linspace(0, 1, 5)[:, None]

with pm.Model() as model:
    # Specify the covariance function.
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.MarginalSparse(cov_func=cov_func, approx="FITC")

    # Place a GP prior over the function f.
    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", X=X, Xu=Xu, y=y, noise=sigma)

...

# After fitting or sampling, specify the distribution
# at new points with .conditional
Xnew = np.linspace(-1, 2, 50)[:, None]

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)

References

  • Quinonero-Candela, J., and Rasmussen, C. E. (2005). A Unifying View of Sparse Approximate Gaussian Process Regression. Journal of Machine Learning Research, 6, 1939-1959.
  • Titsias, M. (2009). Variational Learning of Inducing Variables in Sparse Gaussian Processes. In Proceedings of AISTATS.
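
The inducing point locations Xu can be fixed as in the example, given a prior, or initialized from the data; a sketch using the k-means helper in pymc3.gp.util:

# Initialize 5 inducing points with k-means on the inputs X.
Xu = pm.gp.util.kmeans_inducing_points(5, X)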
conditional(name, Xnew, pred_noise=False, given=None, **kwargs)

Returns the approximate conditional distribution of the GP evaluated over new input locations Xnew.

Parameters:
  • name (string) – Name of the random variable
  • Xnew (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • pred_noise (bool) – Whether or not observation noise is included in the conditional. Default is False.
  • given (dict) – Can optionally take as key value pairs: X, Xu, y, noise, and gp. See the section in the documentation on additive GP models in PyMC3 for more information.
  • **kwargs – Extra keyword arguments that are passed to MvNormal distribution constructor.
marginal_likelihood(name, X, Xu, y, noise=None, is_observed=True, **kwargs)

Returns the approximate marginal likelihood distribution, given the input locations X, inducing point locations Xu, data y, and the white noise standard deviation, noise.

Parameters:
  • name (string) – Name of the random variable
  • X (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • Xu (array-like) – The inducing points. Must have the same number of columns as X.
  • y (array-like) – Data that is the sum of the function with the GP prior and Gaussian noise. Must have shape (n, ).
  • noise (scalar, Variable) – Standard deviation of the Gaussian noise.
  • is_observed (bool) – Whether to set y as an observed variable in the model. Default is True.
  • **kwargs – Extra keyword arguments that are passed to MvNormal distribution constructor.
class pymc3.gp.gp.MarginalKron(mean_func=<pymc3.gp.mean.Zero object>, cov_funcs=<pymc3.gp.cov.Constant object>)

Marginal Gaussian process whose covariance is a tensor product kernel.

The gp.MarginalKron class is an implementation of the sum of a Kronecker GP prior and additive white noise. It has marginal_likelihood, conditional and predict methods. This GP implementation can be used to efficiently implement regression on data that are normally distributed with a tensor product kernel and are measured on a full grid of inputs: cartesian(*Xs). MarginalKron is based on the KroneckerNormal distribution, see its docstring for more information. For more information on the prior and conditional methods, see their docstrings.

Parameters:
  • cov_funcs (list of Covariance objects) – The covariance functions that compose the tensor (Kronecker) product. Defaults to [zero].
  • mean_func (None, instance of Mean) – The mean function. Defaults to zero.

Examples

# One dimensional column vectors of inputs
X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 2, 5)[:, None]
Xs = [X1, X2]
y = np.random.randn(len(X1)*len(X2))  # toy data
with pm.Model() as model:
    # Specify the covariance functions for each Xi
    cov_func1 = pm.gp.cov.ExpQuad(1, ls=0.1)  # Must accept X1 without error
    cov_func2 = pm.gp.cov.ExpQuad(1, ls=0.3)  # Must accept X2 without error

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.MarginalKron(cov_funcs=[cov_func1, cov_func2])

    # Place a GP prior over the function f.
    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", Xs=Xs, y=y, sigma=sigma)

    # ...

# After fitting or sampling, specify the distribution
# at new points with .conditional
# Xnew need not be on a full grid
Xnew1 = np.linspace(-1, 2, 10)[:, None]
Xnew2 = np.linspace(0, 3, 10)[:, None]
Xnew = np.concatenate((Xnew1, Xnew2), axis=1)  # Not full grid, works
Xnew = pm.math.cartesian(Xnew1, Xnew2)  # Full grid, also works

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
conditional(name, Xnew, pred_noise=False, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew, just as in Marginal.

Xnew will be split by columns and fed to the relevant covariance functions based on their input_dim. For example, if cov_func1, cov_func2, and cov_func3 have input_dim of 2, 1, and 4, respectively, then Xnew must have 7 columns, and the covariance between the prediction points is calculated as

cov_func(Xnew) = cov_func1(Xnew[:, :2]) * cov_func2(Xnew[:, 2:3]) * cov_func3(Xnew[:, 3:])

This cov_func does not have a Kronecker structure without a full grid, but the conditional distribution does not have a Kronecker structure regardless. Thus, the conditional method must fall back to using MvNormal rather than KroneckerNormal in either case.
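
A sketch of this splitting rule, with hypothetical covariance functions of input_dim 2, 1, and 4:

cov_func1 = pm.gp.cov.ExpQuad(2, ls=[0.1, 0.2])
cov_func2 = pm.gp.cov.ExpQuad(1, ls=0.1)
cov_func3 = pm.gp.cov.ExpQuad(4, ls=[0.1, 0.2, 0.3, 0.4])
gp = pm.gp.MarginalKron(cov_funcs=[cov_func1, cov_func2, cov_func3])

# Prediction points need 2 + 1 + 4 = 7 columns; internally
# Xnew[:, :2] -> cov_func1, Xnew[:, 2:3] -> cov_func2, Xnew[:, 3:] -> cov_func3.
Xnew = np.random.randn(25, 7)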

Parameters:
  • name (string) – Name of the random variable
  • Xnew (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • pred_noise (bool) – Whether or not observation noise is included in the conditional. Default is False.
  • **kwargs – Extra keyword arguments that are passed to MvNormal distribution constructor.
marginal_likelihood(name, Xs, y, sigma, is_observed=True, **kwargs)

Returns the marginal likelihood distribution, given the input locations cartesian(*Xs) and the data y.

Parameters:
  • name (string) – Name of the random variable
  • Xs (list of array-like) – Function input values for each covariance function. Each entry must be passable to its respective covariance without error. The total covariance function is measured on the full grid cartesian(*Xs).
  • y (array-like) – Data that is the sum of the function with the GP prior and Gaussian noise. Must have shape (n, ).
  • sigma (scalar, Variable) – Standard deviation of the white Gaussian noise.
  • is_observed (bool) – Whether to set y as an observed variable in the model. Default is True.
  • **kwargs – Extra keyword arguments that are passed to KroneckerNormal distribution constructor.
predict(Xnew, point=None, diag=False, pred_noise=False)

Return the mean vector and covariance matrix of the conditional distribution as numpy arrays, given a point, such as the MAP estimate or a sample from a trace.

Parameters:
  • Xnew (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • point (pymc3.model.Point) – A specific point to condition on.
  • diag (bool) – If True, return the diagonal instead of the full covariance matrix. Default is False.
  • pred_noise (bool) – Whether or not observation noise is included in the conditional. Default is False.
predictt(Xnew, diag=False, pred_noise=False)

Return the mean vector and covariance matrix of the conditional distribution as symbolic variables.

Parameters:
  • Xnew (array-like) – Function input values. If one-dimensional, must be a column vector with shape (n, 1).
  • diag (bool) – If True, return the diagonal instead of the full covariance matrix. Default is False.
  • pred_noise (bool) – Whether or not observation noise is included in the conditional. Default is False.

Mean Functions

  • Zero – Zero mean function for Gaussian process.
  • Constant([c]) – Constant mean function for Gaussian process.
  • Linear(coeffs[, intercept]) – Linear mean function for Gaussian process.
class pymc3.gp.mean.Zero

Zero mean function for Gaussian process.

class pymc3.gp.mean.Constant(c=0)

Constant mean function for Gaussian process.

Parameters:
  • c (variable, array or integer) – Constant mean value
class pymc3.gp.mean.Linear(coeffs, intercept=0)

Linear mean function for Gaussian process.

Parameters:
  • coeffs (variables) – Linear coefficients
  • intercept (variable, array or integer) – Intercept for linear function (Defaults to zero)
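
A sketch of swapping the default Zero mean for a linear mean in any of the GP implementations above; the numeric values are illustrative:

# GP whose prior mean function is the line 2x + 0.5 rather than zero.
mean_func = pm.gp.mean.Linear(coeffs=[2.0], intercept=0.5)
cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
gp = pm.gp.Marginal(mean_func=mean_func, cov_func=cov_func)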

Covariance Functions

  • Constant(c) – Constant valued covariance function.
  • WhiteNoise(sigma) – White noise covariance function.
  • ExpQuad(input_dim[, ls, ls_inv, active_dims]) – The Exponentiated Quadratic kernel.
  • RatQuad(input_dim, alpha[, ls, ls_inv, …]) – The Rational Quadratic kernel.
  • Matern32(input_dim[, ls, ls_inv, active_dims]) – The Matern kernel with nu = 3/2.
  • Matern52(input_dim[, ls, ls_inv, active_dims]) – The Matern kernel with nu = 5/2.
  • Exponential(input_dim[, ls, ls_inv, active_dims]) – The Exponential kernel.
  • Cosine(input_dim[, ls, ls_inv, active_dims]) – The Cosine kernel.
  • Periodic(input_dim, period[, ls, ls_inv, …]) – The Periodic kernel.
  • Linear(input_dim, c[, active_dims]) – The Linear kernel.
  • Polynomial(input_dim, c, d, offset[, …]) – The Polynomial kernel.
  • WarpedInput(input_dim, cov_func, warp_func) – Warp the inputs of any kernel using an arbitrary function defined using Theano.
  • Gibbs(input_dim, lengthscale_func[, args, …]) – The Gibbs kernel.
class pymc3.gp.cov.Constant(c)

Constant valued covariance function.

\[k(x, x') = c\]
class pymc3.gp.cov.WhiteNoise(sigma)

White noise covariance function.

\[k(x, x') = \sigma^2 \mathrm{I}\]
class pymc3.gp.cov.ExpQuad(input_dim, ls=None, ls_inv=None, active_dims=None)

The Exponentiated Quadratic kernel. Also referred to as the Squared Exponential or Radial Basis Function kernel.

\[k(x, x') = \mathrm{exp}\left[ -\frac{(x - x')^2}{2 \ell^2} \right]\]
class pymc3.gp.cov.RatQuad(input_dim, alpha, ls=None, ls_inv=None, active_dims=None)

The Rational Quadratic kernel.

\[k(x, x') = \left(1 + \frac{(x - x')^2}{2\alpha\ell^2} \right)^{-\alpha}\]
class pymc3.gp.cov.Exponential(input_dim, ls=None, ls_inv=None, active_dims=None)

The Exponential kernel.

\[k(x, x') = \mathrm{exp}\left[ -\frac{||x - x'||}{2\ell} \right]\]
class pymc3.gp.cov.Matern52(input_dim, ls=None, ls_inv=None, active_dims=None)

The Matern kernel with nu = 5/2.

\[k(x, x') = \left(1 + \frac{\sqrt{5(x - x')^2}}{\ell} + \frac{5(x-x')^2}{3\ell^2}\right) \mathrm{exp}\left[ - \frac{\sqrt{5(x - x')^2}}{\ell} \right]\]
class pymc3.gp.cov.Matern32(input_dim, ls=None, ls_inv=None, active_dims=None)

The Matern kernel with nu = 3/2.

\[k(x, x') = \left(1 + \frac{\sqrt{3(x - x')^2}}{\ell}\right) \mathrm{exp}\left[ - \frac{\sqrt{3(x - x')^2}}{\ell} \right]\]
class pymc3.gp.cov.Linear(input_dim, c, active_dims=None)

The Linear kernel.

\[k(x, x') = (x - c)(x' - c)\]
class pymc3.gp.cov.Polynomial(input_dim, c, d, offset, active_dims=None)

The Polynomial kernel.

\[k(x, x') = [(x - c)(x' - c) + \mathrm{offset}]^{d}\]
class pymc3.gp.cov.Cosine(input_dim, ls=None, ls_inv=None, active_dims=None)

The Cosine kernel.

\[k(x, x') = \mathrm{cos}\left( \pi \frac{||x - x'||}{\ell} \right)\]
class pymc3.gp.cov.Periodic(input_dim, period, ls=None, ls_inv=None, active_dims=None)

The Periodic kernel.

\[k(x, x') = \mathrm{exp}\left( -\frac{2 \mathrm{sin}^2(\pi |x-x'| \frac{1}{T})}{\ell^2} \right)\]
class pymc3.gp.cov.WarpedInput(input_dim, cov_func, warp_func, args=None, active_dims=None)

Warp the inputs of any kernel using an arbitrary function defined using Theano.

\[k(x, x') = k(w(x), w(x'))\]
Parameters:
  • cov_func (Covariance) – The covariance function to apply to the warped inputs.
  • warp_func (callable) – Theano function of X and additional optional arguments.
  • args (optional, tuple or list of scalars or PyMC3 variables) – Additional inputs (besides X or Xs) to warp_func.
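
A sketch of a common use: building a periodic kernel by warping 1D inputs onto a circle before applying ExpQuad. The mapping function and its period T are illustrative:

import theano.tensor as tt

def mapping(x, T):
    # Map x onto a circle with period T.
    c = 2.0 * np.pi / T
    return tt.concatenate((tt.sin(c * x), tt.cos(c * x)), axis=1)

cov_exp = pm.gp.cov.ExpQuad(2, ls=0.2)   # acts on the 2D warped inputs
cov = pm.gp.cov.WarpedInput(1, cov_func=cov_exp, warp_func=mapping, args=(0.5,))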
class pymc3.gp.cov.Gibbs(input_dim, lengthscale_func, args=None, active_dims=None)

The Gibbs kernel. Use an arbitrary lengthscale function defined using Theano. Only tested in one dimension.

\[k(x, x') = \sqrt{\frac{2\ell(x)\ell(x')}{\ell^2(x) + \ell^2(x')}} \mathrm{exp}\left[ -\frac{(x - x')^2} {\ell^2(x) + \ell^2(x')} \right]\]
Parameters:
  • lengthscale_func (callable) – Theano function of X and additional optional arguments.
  • args (optional, tuple or list of scalars or PyMC3 variables) – Additional inputs (besides X or Xs) to lengthscale_func.
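
A sketch with an illustrative lengthscale function that grows linearly with x; the function and its parameters a, b are hypothetical:

def ls_func(x, a, b):
    # Lengthscale at each input location; must return a Theano expression.
    return a + b * x

cov = pm.gp.cov.Gibbs(1, ls_func, args=(0.1, 0.5))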
class pymc3.gp.cov.Coregion(input_dim, W=None, kappa=None, B=None, active_dims=None)

Covariance function for intrinsic/linear coregionalization models. Adapted from GPy http://gpy.readthedocs.io/en/deploy/GPy.kern.src.html#GPy.kern.src.coregionalize.Coregionalize.

This covariance has the form:

\[\mathbf{B} = \mathbf{W}\mathbf{W}^\top + \text{diag}(\kappa)\]

and calls must use integers associated with the index of the matrix. This allows the API to remain consistent with other covariance objects:

\[k(x, x') = \mathbf{B}[x, x']\]
Parameters:
  • W (2D array of shape (num_outputs, rank)) – a low rank matrix that determines the correlations between the different outputs (rows)
  • kappa (1D array of shape (num_outputs, )) – a vector which allows the outputs to behave independently
  • B (2D array of shape (num_outputs, num_outputs)) – the total matrix; exactly one of (W, kappa) and B must be provided

Notes

Exactly one dimension must be active for this kernel. Thus, if input_dim != 1, then active_dims must have a length of one.
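
A sketch of an intrinsic coregionalization model over two outputs, where column 1 of the two-column inputs holds integer output indices and column 0 holds the actual x values; the priors on W and kappa are illustrative:

with pm.Model() as model:
    # B = W W^T + diag(kappa), over 2 outputs with a rank-1 W.
    W = pm.Normal("W", mu=0, sd=3, shape=(2, 1))
    kappa = pm.Gamma("kappa", alpha=1.5, beta=1, shape=2)
    coreg = pm.gp.cov.Coregion(input_dim=2, W=W, kappa=kappa, active_dims=[1])

    # Product with a standard kernel acting on column 0.
    cov_func = coreg * pm.gp.cov.Matern32(input_dim=2, ls=0.1, active_dims=[0])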