Stats

Statistical utility functions for PyMC3

pymc3.stats.autocorr(pymc3_obj, *args, **kwargs)

Sample autocorrelation at specified lag.

Parameters:
  • x (Numpy array) – An array containing MCMC samples
  • lag (int) – The desired lag to take into consideration
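
For example, a minimal sketch on synthetic draws (the array x below is a stand-in for real MCMC samples):

>>> import numpy as np
>>> import pymc3 as pm
>>> x = np.random.randn(1000)     # stand-in for a 1-D array of MCMC samples
>>> pm.stats.autocorr(x, lag=1)   # sample autocorrelation at lag 1
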
pymc3.stats.autocov(pymc3_obj, *args, **kwargs)

Sample autocovariance at specified lag.

Parameters:
  • x (Numpy array) – An array containing MCMC samples
  • lag (int) – The desired lag to take into consideration
Returns:

2x2 matrix with the variances of x[:-lag] and x[lag:] on the diagonal and the autocovariance on the off-diagonal.
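
For example, a sketch on synthetic draws (x is again a stand-in for real samples):

>>> import numpy as np
>>> import pymc3 as pm
>>> x = np.random.randn(1000)     # stand-in for MCMC samples
>>> pm.stats.autocov(x, lag=2)    # 2x2 variance/autocovariance matrix at lag 2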

pymc3.stats.dic(trace, model=None)

Calculate the deviance information criterion of the samples in trace from model. Read more of the theory in a paper by some of the leading authorities on model selection: dx.doi.org/10.1111/1467-9868.00353

Parameters:
  • trace (result of MCMC run) –
  • model (PyMC Model) – Optional model. Default None, taken from context.
Returns:

z (float) – The deviance information criterion of the model and trace
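
For example, a sketch with an illustrative toy model (the model and data below are hypothetical stand-ins):

>>> import numpy as np
>>> import pymc3 as pm
>>> y = np.random.randn(100)                      # synthetic data
>>> with pm.Model() as model:
...     mu = pm.Normal('mu', mu=0, sd=10)
...     pm.Normal('obs', mu=mu, sd=1, observed=y)
...     trace = pm.sample(1000, tune=1000)
...     model_dic = pm.stats.dic(trace)           # model taken from the context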

pymc3.stats.bpic(trace, model=None)

Calculates the Bayesian predictive information criterion of the samples in trace from model. Read more of the theory in a paper by some of the leading authorities on model selection: dx.doi.org/10.1080/01966324.2011.10737798

Parameters:
  • trace (result of MCMC run) –
  • model (PyMC Model) – Optional model. Default None, taken from context.
Returns:

z (float) – The Bayesian predictive information criterion of the model and trace
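
Usage mirrors dic; a sketch, again with an illustrative toy model and synthetic data:

>>> import numpy as np
>>> import pymc3 as pm
>>> y = np.random.randn(100)                      # synthetic data
>>> with pm.Model() as model:
...     mu = pm.Normal('mu', mu=0, sd=10)
...     pm.Normal('obs', mu=mu, sd=1, observed=y)
...     trace = pm.sample(1000, tune=1000)
>>> pm.stats.bpic(trace, model=model)             # or call inside the model context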

pymc3.stats.waic(trace, model=None, pointwise=False, progressbar=False)

Calculate the widely applicable information criterion (WAIC), its standard error and the effective number of parameters of the samples in trace from model. Read more of the theory in a paper by some of the leading authorities on model selection: dx.doi.org/10.1111/1467-9868.00353

Parameters:
  • trace (result of MCMC run) –
  • model (PyMC Model) – Optional model. Default None, taken from context.
  • pointwise (bool) – if True the pointwise predictive accuracy will be returned. Default False
  • progressbar (bool) – Whether or not to display a progress bar in the command line. The bar shows the percentage of completion, the evaluation speed, and the estimated time to completion
Returns:

  • namedtuple with the following elements
  • waic (widely applicable information criterion)
  • waic_se (standard error of waic)
  • p_waic (effective number of parameters)
  • waic_i (an array of the pointwise predictive accuracy, only if pointwise=True)
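
For example, a sketch with an illustrative toy model; the result is the namedtuple described above:

>>> import numpy as np
>>> import pymc3 as pm
>>> y = np.random.randn(100)                      # synthetic data
>>> with pm.Model() as model:
...     mu = pm.Normal('mu', mu=0, sd=10)
...     pm.Normal('obs', mu=mu, sd=1, observed=y)
...     trace = pm.sample(1000, tune=1000)
>>> print(pm.stats.waic(trace, model=model))      # waic, waic_se, p_waic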

pymc3.stats.loo(trace, model=None, pointwise=False, progressbar=False)

Calculates leave-one-out (LOO) cross-validation for out of sample predictive model fit, following Vehtari et al. (2015). Cross-validation is computed using Pareto-smoothed importance sampling (PSIS).

Parameters:
  • trace (result of MCMC run) –
  • model (PyMC Model) – Optional model. Default None, taken from context.
  • pointwise (bool) – if True the pointwise predictive accuracy will be returned. Default False
  • progressbar (bool) – Whether or not to display a progress bar in the command line. The bar shows the percentage of completion, the evaluation speed, and the estimated time to completion
Returns:

  • namedtuple with the following elements
  • loo (approximated Leave-one-out cross-validation)
  • loo_se (standard error of loo)
  • p_loo (effective number of parameters)
  • loo_i (an array of the pointwise predictive accuracy, only if pointwise=True)
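
For example, a sketch analogous to the waic one above (illustrative toy model, synthetic data):

>>> import numpy as np
>>> import pymc3 as pm
>>> y = np.random.randn(100)                      # synthetic data
>>> with pm.Model() as model:
...     mu = pm.Normal('mu', mu=0, sd=10)
...     pm.Normal('obs', mu=mu, sd=1, observed=y)
...     trace = pm.sample(1000, tune=1000)
>>> pm.stats.loo(trace, model=model, pointwise=True)   # includes loo_i per observation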

pymc3.stats.hpd(pymc3_obj, *args, **kwargs)

Calculate highest posterior density (HPD) of array for given alpha. The HPD is the minimum width Bayesian credible interval (BCI).

Parameters:
  • x (Numpy array) – An array containing MCMC samples
  • alpha (float) – Desired probability of type I error (defaults to 0.05)
  • transform (callable) – Function to transform data (defaults to identity)
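
For example, a sketch on synthetic samples:

>>> import numpy as np
>>> import pymc3 as pm
>>> x = np.random.randn(5000)        # stand-in for MCMC samples
>>> pm.stats.hpd(x, alpha=0.05)      # lower and upper bounds of the 95% HPD interval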

pymc3.stats.quantiles(pymc3_obj, *args, **kwargs)

Returns a dictionary of requested quantiles from an array

Parameters:
  • x (Numpy array) – An array containing MCMC samples
  • qlist (tuple or list) – A list of desired quantiles (defaults to (2.5, 25, 50, 75, 97.5))
  • transform (callable) – Function to transform data (defaults to identity)
Returns:

dictionary with the quantiles {quantile: value}
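
For example, a sketch on synthetic samples:

>>> import numpy as np
>>> import pymc3 as pm
>>> x = np.random.randn(5000)                 # stand-in for MCMC samples
>>> pm.quantiles(x)                           # default quantiles 2.5, 25, 50, 75, 97.5
>>> pm.quantiles(x, qlist=[5, 50, 95])        # custom quantiles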

pymc3.stats.mc_error(pymc3_obj, *args, **kwargs)

Calculates the simulation standard error, accounting for non-independent samples. The trace is divided into batches, and the standard deviation of the batch means is calculated.

Parameters:
  • x (Numpy array) – An array containing MCMC samples
  • batches (integer) – Number of batches
Returns:

float representing the error
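
For example, a sketch on synthetic samples (the batch count is illustrative):

>>> import numpy as np
>>> import pymc3 as pm
>>> x = np.random.randn(2000)          # stand-in for MCMC samples
>>> pm.stats.mc_error(x, batches=5)    # simulation standard error from 5 batch means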

pymc3.stats.summary(trace, varnames=None, transform=<function <lambda>>, stat_funcs=None, extend=False, include_transformed=False, alpha=0.05, start=0, batches=None)

Create a data frame with summary statistics.

Parameters:
  • trace (MultiTrace instance) –
  • varnames (list) – Names of variables to include in summary
  • transform (callable) – Function to transform data (defaults to identity)
  • stat_funcs (None or list) –

    A list of functions used to calculate statistics. By default, the mean, standard deviation, simulation standard error, and highest posterior density intervals are included.

    The functions will be given one argument, the samples for a variable as a 2 dimensional array, where the first axis corresponds to sampling iterations and the second axis represents the flattened variable (e.g., x__0, x__1,…). Each function should return either

    1. A pandas.Series instance containing the result of calculating the statistic along the first axis. The name attribute will be taken as the name of the statistic.
    2. A pandas.DataFrame where each column contains the result of calculating the statistic along the first axis. The column names will be taken as the names of the statistics.
  • extend (boolean) – If True, use the statistics returned by stat_funcs in addition to, rather than in place of, the default statistics. This is only meaningful when stat_funcs is not None.
  • include_transformed (bool) – Flag for reporting automatically transformed variables in addition to original variables (defaults to False).
  • alpha (float) – The alpha level for generating posterior intervals. Defaults to 0.05. This is only meaningful when stat_funcs is None.
  • start (int) – The starting index from which to summarize (each) chain. Defaults to zero.
  • batches (None or int) – Batch size for calculating standard deviation for non-independent samples. Defaults to the smaller of 100 or the number of samples. This is only meaningful when stat_funcs is None.

See also

summary()
Generate a pretty-printed summary of a trace.
Returns:

pandas.DataFrame with summary statistics for each variable. The default statistics are mean, sd, mc_error, hpd_2.5, hpd_97.5, n_eff and Rhat; the last two are only computed for traces with 2 or more chains.

Examples

>>> import pymc3 as pm
>>> trace.mu.shape
(1000, 2)
>>> pm.summary(trace, ['mu'])
           mean        sd  mc_error     hpd_5    hpd_95
mu__0  0.106897  0.066473  0.001818 -0.020612  0.231626
mu__1 -0.046597  0.067513  0.002048 -0.174753  0.081924

          n_eff      Rhat
mu__0     487.0   1.00001
mu__1     379.0   1.00203

Other statistics can be calculated by passing a list of functions.

>>> import numpy as np
>>> import pandas as pd
>>> def trace_sd(x):
...     return pd.Series(np.std(x, 0), name='sd')
...
>>> def trace_quantiles(x):
...     return pd.DataFrame(pm.quantiles(x, [5, 50, 95]))
...
>>> pm.summary(trace, ['mu'], stat_funcs=[trace_sd, trace_quantiles])
             sd         5        50        95
mu__0  0.066473  0.000312  0.105039  0.214242
mu__1  0.067513 -0.159097 -0.045637  0.062912

pymc3.stats.compare(traces, models, ic='WAIC', method='stacking', b_samples=1000, alpha=1, seed=None, round_to=2)

Compare models based on the widely applicable information criterion (WAIC) or leave-one-out (LOO) cross-validation. Read more of the theory in a paper by some of the leading authorities on model selection: dx.doi.org/10.1111/1467-9868.00353

Parameters:
  • traces (list of PyMC3 traces) –
  • models (list of PyMC3 models) – in the same order as traces.
  • ic (string) – Information Criterion (WAIC or LOO) used to compare models. Default WAIC.
  • method (str) –

    Method used to estimate the weights for each model. Available options are:

    • 'stacking' : (default) stacking of predictive distributions.
    • 'BB-pseudo-BMA' : pseudo-Bayesian Model averaging using Akaike-type
      weighting. The weights are stabilized using the Bayesian bootstrap.
    • 'pseudo-BMA': pseudo-Bayesian Model averaging using Akaike-type
      weighting, without Bootstrap stabilization (not recommended).

    For more information read https://arxiv.org/abs/1704.02030

  • b_samples (int) – Number of samples taken by the Bayesian bootstrap estimation. Only useful when method = ‘BB-pseudo-BMA’.
  • alpha (float) – The shape parameter in the Dirichlet distribution used for the Bayesian bootstrap. Only useful when method = ‘BB-pseudo-BMA’. When alpha=1 (default), the distribution is uniform on the simplex. A smaller alpha will keep the final weights further away from 0 and 1.
  • seed (int or np.random.RandomState instance) – If int or RandomState, use it for seeding the Bayesian bootstrap. Only useful when method = ‘BB-pseudo-BMA’. Default None, in which case the global np.random state is used.
  • round_to (int) – Number of decimals used to round results (default 2).
Returns:

A DataFrame, ordered from lowest to highest IC. The index reflects the order in which the models are passed to this function. The columns are:

  • IC – Information Criterion (WAIC or LOO). Smaller IC indicates higher out-of-sample predictive fit (“better” model). Default WAIC.
  • pIC – Estimated effective number of parameters.
  • dIC – Relative difference between each IC (WAIC or LOO) and the lowest IC (WAIC or LOO). It’s always 0 for the top-ranked model.
  • weight – Relative weight for each model. This can be loosely interpreted as the probability of each model (among the compared models) given the data. By default the uncertainty in the weights estimation is considered using Bayesian bootstrap.
  • SE – Standard error of the IC estimate. If method = BB-pseudo-BMA these values are estimated using Bayesian bootstrap.
  • dSE – Standard error of the difference in IC between each model and the top-ranked model. It’s always 0 for the top-ranked model.
  • warning – A value of 1 indicates that the computation of the IC may not be reliable; see http://arxiv.org/abs/1507.04544 for details.
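
For example, a sketch comparing two illustrative toy models fit to the same synthetic data:

>>> import numpy as np
>>> import pymc3 as pm
>>> y = np.random.randn(100)                      # synthetic data
>>> with pm.Model() as model_a:
...     mu = pm.Normal('mu', mu=0, sd=10)
...     pm.Normal('obs', mu=mu, sd=1, observed=y)
...     trace_a = pm.sample(1000, tune=1000)
>>> with pm.Model() as model_b:
...     mu = pm.Normal('mu', mu=0, sd=1)
...     pm.Normal('obs', mu=mu, sd=1, observed=y)
...     trace_b = pm.sample(1000, tune=1000)
>>> pm.stats.compare([trace_a, trace_b], [model_a, model_b], ic='WAIC')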

pymc3.stats.bfmi(trace)

Calculate the estimated Bayesian fraction of missing information (BFMI).

BFMI quantifies how well momentum resampling matches the marginal energy distribution. For more information on BFMI, see https://arxiv.org/pdf/1604.00695.pdf. The current advice is that values smaller than 0.2 indicate poor sampling. However, this threshold is provisional and may change. See http://mc-stan.org/users/documentation/case-studies/pystan_workflow.html for more information.

Parameters:
  • trace (result of an HMC/NUTS run, must contain energy information) –
Returns:

z (float) – The Bayesian fraction of missing information of the model and trace.
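
For example, a sketch with an illustrative toy model sampled with NUTS (which stores the energy information bfmi needs):

>>> import numpy as np
>>> import pymc3 as pm
>>> y = np.random.randn(100)                      # synthetic data
>>> with pm.Model():
...     mu = pm.Normal('mu', mu=0, sd=10)
...     pm.Normal('obs', mu=mu, sd=1, observed=y)
...     trace = pm.sample(1000, tune=1000)        # NUTS is the default for continuous models
>>> pm.stats.bfmi(trace)
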
pymc3.stats.r2_score(y_true, y_pred, round_to=2)

R-squared for Bayesian regression models. Only valid for linear models. http://www.stat.columbia.edu/%7Egelman/research/unpublished/bayes_R2.pdf

Parameters:
  • y_true (array-like of shape = (n_samples) or (n_samples, n_outputs)) – Ground truth (correct) target values.
  • y_pred (array-like of shape = (n_samples) or (n_samples, n_outputs)) – Estimated target values.
  • round_to (int) – Number of decimals used to round results (default 2).
Returns:

  • namedtuple with the following elements
  • R2_median (median of the Bayesian R2)
  • R2_mean (mean of the Bayesian R2)
  • R2_std (standard deviation of the Bayesian R2)
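
For example, a sketch with synthetic targets and hypothetical predictions standing in for a real linear model's output:

>>> import numpy as np
>>> import pymc3 as pm
>>> y_true = np.random.randn(100)                   # synthetic targets
>>> y_pred = y_true + 0.5 * np.random.randn(100)    # hypothetical predictions
>>> pm.stats.r2_score(y_true, y_pred)               # namedtuple: R2_median, R2_mean, R2_std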