Getting started with PyMC3
Authors: John Salvatier, Thomas V. Wiecki, Christopher Fonnesbeck
Note: This text is based on the PeerJ CS publication on PyMC3.
Abstract
Probabilistic programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo (HMC), requires gradient information that is often not readily available. PyMC3 is a new open-source probabilistic programming framework written in Python that uses Theano to compute gradients via automatic differentiation, as well as to compile probabilistic programs on the fly to C for increased speed. In contrast to other probabilistic programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain-specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.
Introduction
Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). This class of samplers works well on high-dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don’t need specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.
Probabilistic programming in Python confers a number of advantages, including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.
While most of PyMC3’s user-facing features are written in pure Python, it leverages Theano (Bergstra et al., 2010) to transparently transcode models to C and compile them to machine code, thereby boosting performance. Theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular NumPy ndarray data structure and, like NumPy arrays, support broadcasting and advanced indexing. Theano also automatically optimizes the likelihood’s computational graph for speed and provides simple GPU integration.
Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends.
Installation
Note: These instructions are out of date and no longer correct. Please see the installation instructions on the GitHub site for PyMC3.
Running PyMC3 requires a working Python interpreter, either version 2.7 (or more recent) or 3.5 (or more recent); we recommend that new users install version 3.5. A complete Python installation for Mac OS X, Linux and Windows can most easily be obtained by downloading and installing the free `Anaconda Python Distribution <https://store.continuum.io/cshop/anaconda/>`__ by ContinuumIO.
PyMC3 can be installed using pip:
pip install pymc3
Or via conda:
conda install pymc3
The current development branch of PyMC3 can be installed from GitHub, also using pip:
pip install git+https://github.com/pymc-devs/pymc3
The source code for PyMC3 is hosted on GitHub at https://github.com/pymc-devs/pymc3 and is distributed under the liberal Apache License 2.0. On the GitHub site, users may also report bugs and other issues, as well as contribute documentation or code to the project, which we actively encourage.
A Motivating Example: Linear Regression
To introduce model definition, fitting and posterior analysis, we first consider a simple Bayesian linear regression model with normal priors for the parameters. We are interested in predicting outcomes \(Y\) as normally-distributed observations with an expected value \(\mu\) that is a linear function of two predictor variables, \(X_1\) and \(X_2\):

\[Y \sim \mathcal{N}(\mu, \sigma^2)\]
\[\mu = \alpha + \beta_1 X_1 + \beta_2 X_2\]

where \(\alpha\) is the intercept, and \(\beta_i\) is the coefficient for covariate \(X_i\), while \(\sigma\) represents the observation error. Since we are constructing a Bayesian model, we must assign a prior distribution to the unknown variables in the model. We choose zero-mean normal priors with a variance of 100 for both regression coefficients, which corresponds to weak information regarding the true parameter values. We choose a half-normal distribution (a normal distribution bounded at zero) as the prior for \(\sigma\).
Generating data
We can simulate some artificial data from this model using only NumPy’s random module, and then use PyMC3 to try to recover the corresponding parameters. We are intentionally generating the data to closely correspond to the PyMC3 model structure.
[1]:
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
[2]:
%config InlineBackend.figure_format = 'retina'
# Initialize random number generator
RANDOM_SEED = 8927
np.random.seed(RANDOM_SEED)
az.style.use("arviz-darkgrid")
[3]:
# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]
# Size of dataset
size = 100
# Predictor variable
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
# Simulate outcome variable
Y = alpha + beta[0] * X1 + beta[1] * X2 + np.random.randn(size) * sigma
Here is what the simulated data look like. We use the pyplot module from the plotting library matplotlib.
[4]:
fig, axes = plt.subplots(1, 2, sharex=True, figsize=(10, 4))
axes[0].scatter(X1, Y, alpha=0.6)
axes[1].scatter(X2, Y, alpha=0.6)
axes[0].set_ylabel("Y")
axes[0].set_xlabel("X1")
axes[1].set_xlabel("X2");
Model Specification
Specifying this model in PyMC3 is straightforward because the syntax is close to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above.
First, we import PyMC3. We use the convention of importing it as pm.
[5]:
import pymc3 as pm
print(f"Running on PyMC3 v{pm.__version__}")
Running on PyMC3 v3.11.0
Now we build our model, which we will present in full first, then explain each part line-by-line.
[6]:
basic_model = pm.Model()

with basic_model:
    # Priors for unknown model parameters
    alpha = pm.Normal("alpha", mu=0, sigma=10)
    beta = pm.Normal("beta", mu=0, sigma=10, shape=2)
    sigma = pm.HalfNormal("sigma", sigma=1)

    # Expected value of outcome
    mu = alpha + beta[0] * X1 + beta[1] * X2

    # Likelihood (sampling distribution) of observations
    Y_obs = pm.Normal("Y_obs", mu=mu, sigma=sigma, observed=Y)
The first line,

basic_model = Model()

creates a new Model object, which is a container for the model random variables.
Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement:

with basic_model:

This creates a context manager, with our basic_model as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with basic_model right after we create them. If you try to create a new random variable without a with model: statement, it will raise an error since there is no obvious model for the variable to be added to.
The first three statements in the context manager:

alpha = Normal('alpha', mu=0, sigma=10)
beta = Normal('beta', mu=0, sigma=10, shape=2)
sigma = HalfNormal('sigma', sigma=1)

create stochastic random variables with normal prior distributions for the regression coefficients, with a mean of 0 and a standard deviation of 10, and a half-normal distribution for the standard deviation of the observations, \(\sigma\). These are stochastic because their values are partly determined by their parents in the dependency graph of random variables, which for priors are simple constants, and partly random (or stochastic).
We call the Normal constructor to create a random variable to use as a normal prior. The first argument is always the name of the random variable, which should almost always match the name of the Python variable being assigned to, since it is sometimes used to retrieve the variable from the model for summarizing output. The remaining required arguments for a stochastic object are the parameters, in this case mu, the mean, and sigma, the standard deviation, to which we assign hyperparameter values for the model. In general, a distribution’s parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. Most commonly used distributions, such as Beta, Exponential, Categorical, Gamma, Binomial and many others, are available in PyMC3.
The beta variable has an additional shape argument to denote it as a vector-valued parameter of size 2. The shape argument is available for all distributions and specifies the length or shape of the random variable; it is optional for scalar variables, since it defaults to a value of one. It can be an integer, to specify an array, or a tuple, to specify a multidimensional array (e.g. shape=(5, 7) makes a random variable that takes on 5-by-7 matrix values).
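As a plain-NumPy illustration (outside of PyMC3) of how a shape=2 coefficient vector combines with the two predictors, here is the same linear predictor built elementwise; the variable names and sizes mirror the example above but are otherwise arbitrary:

```python
import numpy as np

# Stand-ins for the model's data and parameters, just to show the shapes.
rng = np.random.default_rng(8927)
X1 = rng.normal(size=100)
X2 = 0.2 * rng.normal(size=100)
beta = np.array([1.0, 2.5])  # plays the role of the shape=2 'beta'

# Same expression the model uses: one expected value per observation
mu = 1.0 + beta[0] * X1 + beta[1] * X2
print(mu.shape)  # -> (100,)
```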
Detailed notes about distributions, sampling methods and other PyMC3 functions are available in the API documentation.
Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship:

mu = alpha + beta[0]*X1 + beta[1]*X2
This creates a deterministic random variable, which implies that its value is completely determined by its parents’ values. That is, there is no uncertainty beyond that which is inherent in the parents’ values. Here, mu is just the sum of the intercept alpha and the two products of the coefficients in beta and the predictor variables, whatever their values may be.
PyMC3 random variables and data can be arbitrarily added, subtracted, divided, multiplied together and indexed into to create new random variables. This allows for great model expressivity. Many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided.
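For intuition, here are NumPy analogues of the dot and inv operations (this is plain NumPy, not PyMC3's Theano-backed functions, but the operations compose the same way on random variables inside a model):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 4.0]])
x = np.array([1.0, 1.0])

# dot (inner product) and inv (matrix inverse) compose like any other array ops
y = np.dot(np.linalg.inv(A), np.dot(A, x))
assert np.allclose(y, x)  # A^-1 (A x) recovers x
```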
The final line of the model defines Y_obs, the sampling distribution of the outcomes in the dataset.

Y_obs = Normal('Y_obs', mu=mu, sigma=sigma, observed=Y)

This is a special case of a stochastic variable that we call an observed stochastic, and it represents the data likelihood of the model. It is identical to a standard stochastic, except that its observed argument, which passes the data to the variable, indicates that the values for this variable were observed and should not be changed by any fitting algorithm applied to the model. The data can be passed in the form of either a numpy.ndarray or a pandas.DataFrame object.
Notice that, unlike for the priors of the model, the parameters for the normal distribution of Y_obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. This creates parent-child relationships between the likelihood and these two variables.
Model fitting
Having completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. Ideally, we could calculate the posterior estimates analytically, but for most nontrivial models, this is not feasible. We will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (MAP) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using Markov Chain Monte Carlo (MCMC) sampling methods.
Maximum a posteriori methods
The maximum a posteriori (MAP) estimate for a model is the mode of the posterior distribution, and it is generally found using numerical optimization methods. This is often fast and easy to do, but it only gives a point estimate for the parameters and can be biased if the mode isn’t representative of the distribution. PyMC3 provides this functionality with the find_MAP function.
Below we find the MAP for our original model. The MAP is returned as a parameter point, which is always represented by a Python dictionary mapping variable names to NumPy arrays of parameter values.
[7]:
map_estimate = pm.find_MAP(model=basic_model)
map_estimate
[7]:
{'alpha': array(0.95724679),
'beta': array([1.10071814, 2.9511438 ]),
'sigma_log__': array(0.03540151),
'sigma': array(1.0360356)}
By default, find_MAP uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior, but it also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell’s method to find the MAP.
[8]:
map_estimate = pm.find_MAP(model=basic_model, method="powell")
map_estimate
/Users/CloudChaoszero/opt/anaconda3/envs/pymc3-dev-py38/lib/python3.8/site-packages/scipy/optimize/_minimize.py:519: RuntimeWarning: Method powell does not use gradient information (jac).
  warn('Method %s does not use gradient information (jac).' % method,
[8]:
{'alpha': array(0.95835519),
'beta': array([1.1017629 , 2.95394057]),
'sigma_log__': array(0.03638187),
'sigma': array(1.03705179)}
It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
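A quick numerical sketch of that pathology, using plain SciPy and an illustrative eight-group example: the joint density of the group means at their common value grows without bound as the group-level scale shrinks, even though the posterior mass in that neighborhood stays small.

```python
import numpy as np
from scipy import stats

group_means = np.zeros(8)  # eight groups, all exactly equal (illustrative)

log_densities = []
for tau in [1.0, 0.1, 0.01]:
    # Joint log-density of the group means under a Normal(0, tau) group distribution
    log_densities.append(stats.norm.logpdf(group_means, loc=0.0, scale=tau).sum())
    print(f"tau={tau}: joint log-density = {log_densities[-1]:.1f}")
```

The printed log-densities increase without bound as tau shrinks, which is exactly the spike an optimizer can get stuck on.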
Most techniques for finding the MAP estimate also only find a local optimum (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different.
In summary, while PyMC3 provides the function find_MAP(), at this point mostly for historical reasons, this function is of little use in most scenarios. If you want a point estimate, you should get it from the posterior. In the next section we will see how to obtain a posterior using sampling methods.
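To make the idea concrete, here is a sketch of what MAP optimization amounts to for the linear regression model above, written directly against scipy.optimize rather than find_MAP. The log-sigma reparameterization mimics PyMC3's automatic transform of positive variables; the details are illustrative, not PyMC3's implementation.

```python
import numpy as np
from scipy import optimize, stats

# Regenerate data as in the example above (seed and sizes match the tutorial)
rng = np.random.default_rng(8927)
X1 = rng.normal(size=100)
X2 = 0.2 * rng.normal(size=100)
Y = 1.0 + 1.0 * X1 + 2.5 * X2 + rng.normal(size=100)

def neg_log_posterior(theta):
    alpha, b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Priors: Normal(0, 10) on the coefficients, HalfNormal(1) on sigma,
    # plus the log-Jacobian of the sigma = exp(log_sigma) transform
    lp = stats.norm.logpdf([alpha, b0, b1], loc=0.0, scale=10.0).sum()
    lp += stats.halfnorm.logpdf(sigma, scale=1.0) + log_sigma
    # Likelihood of the observed outcomes
    lp += stats.norm.logpdf(Y, loc=alpha + b0 * X1 + b1 * X2, scale=sigma).sum()
    return -lp

result = optimize.minimize(neg_log_posterior, np.zeros(4), method="BFGS")
alpha_map, b0_map, b1_map, log_sigma_map = result.x
```

The optimum lands near the true values (alpha ≈ 1, beta ≈ [1, 2.5], sigma ≈ 1), just as find_MAP does above.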
Sampling methods
Though finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the true posterior distribution.
To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3’s step_methods submodule contains the following samplers: NUTS, Metropolis, Slice, HamiltonianMC, and BinaryMetropolis. These step methods can be assigned manually, or assigned automatically by PyMC3. Auto-assignment is based on the attributes of each variable in the model. In general:

* Binary variables will be assigned to BinaryMetropolis
* Discrete variables will be assigned to Metropolis
* Continuous variables will be assigned to NUTS
Auto-assignment can be overridden for any subset of variables by specifying them manually prior to sampling.
Gradient-based sampling methods
PyMC3 has standard sampling algorithms like adaptive Metropolis-Hastings and adaptive slice sampling, but PyMC3’s most capable step method is the No-U-Turn Sampler. NUTS is especially useful for models that have many continuous parameters, a situation where other MCMC algorithms work very slowly. It takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior density. This helps it achieve dramatically faster convergence on large problems than traditional sampling methods. PyMC3 relies on Theano to analytically compute model gradients via automatic differentiation of the posterior density. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo. NUTS cannot be used for random variables that are not differentiable (namely, discrete variables), but it may still be used on the differentiable variables in a model that contains non-differentiable variables.
NUTS requires a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although NUTS uses it somewhat differently. The matrix gives the rough shape of the distribution so that NUTS does not make jumps that are too large in some directions and too small in others. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. Poor scaling parameters will slow down NUTS significantly, sometimes almost stopping it completely. A reasonable starting point for sampling can also be important for efficient sampling, though less often.
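To build intuition for how gradient information is used, here is a toy Hamiltonian Monte Carlo sampler for a one-dimensional standard normal. This is a pedagogical sketch, not PyMC3's NUTS implementation; the step size eps and path length n_steps here play the role of the tuning parameters discussed above.

```python
import numpy as np

def leapfrog(q, p, grad_logp, eps, n_steps):
    """Simulate Hamiltonian dynamics with the leapfrog integrator."""
    p = p + 0.5 * eps * grad_logp(q)       # half step for momentum
    for _ in range(n_steps):
        q = q + eps * p                    # full step for position
        p = p + eps * grad_logp(q)         # full step for momentum
    p = p - 0.5 * eps * grad_logp(q)       # pull back the overshot half step
    return q, p

def hmc_standard_normal(n_samples=2000, eps=0.2, n_steps=10, seed=1):
    """Sample from N(0, 1), whose logp(q) = -q**2 / 2 up to a constant."""
    grad_logp = lambda q: -q               # gradient of the log density
    rng = np.random.default_rng(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p0 = rng.normal()                  # fresh momentum each iteration
        q_new, p_new = leapfrog(q, p0, grad_logp, eps, n_steps)
        # Metropolis correction on the Hamiltonian (total energy)
        h_old = 0.5 * p0**2 + 0.5 * q**2
        h_new = 0.5 * p_new**2 + 0.5 * q_new**2
        if rng.random() < np.exp(h_old - h_new):
            q = q_new
        samples.append(q)
    return np.array(samples)

draws = hmc_standard_normal()
```

The gradient steers each trajectory toward higher-probability regions, which is why acceptance stays high even for long jumps; NUTS adds the automatic tuning of eps and the trajectory length on top of this scheme.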
PyMC3 automatically initializes NUTS to reasonable values based on the variance of the samples obtained during a tuning phase. A little bit of noise is added to ensure that different, parallel chains start from different points. Also, PyMC3 will automatically assign an appropriate sampler if we don’t supply it via the step keyword argument (see below for an example of how to explicitly assign step methods).
[9]:
with basic_model:
    # draw 500 posterior samples
    trace = pm.sample(500, return_inferencedata=False)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, beta, alpha]
Sampling 2 chains for 1_000 tune and 500 draw iterations (2_000 + 1_000 draws total) took 21 seconds.
The sample function runs the step method(s) assigned (or passed) to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected. The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows:
[10]:
trace["alpha"][-5:]
[10]:
array([0.92353953, 0.85614491, 1.03088924, 1.02631406, 0.92231234])
If we wanted to use the slice sampling algorithm to sample sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample.
[11]:
with basic_model:
    # instantiate sampler
    step = pm.Slice()

    # draw 5000 posterior samples
    trace = pm.sample(5000, step=step, return_inferencedata=False)
Multiprocess sampling (2 chains in 2 jobs)
CompoundStep
>Slice: [sigma]
>Slice: [beta]
>Slice: [alpha]
Sampling 2 chains for 1_000 tune and 5_000 draw iterations (2_000 + 10_000 draws total) took 40 seconds.
Posterior analysis
PyMC3’s plotting and diagnostics functionalities are now taken care of by a dedicated, platform-agnostic package named ArviZ. A simple posterior plot can be created using plot_trace.
[12]:
with basic_model:
    az.plot_trace(trace);
The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable, while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.
In addition, the summary function provides a text-based output of common posterior statistics:
[13]:
with basic_model:
    display(az.summary(trace, round_to=2))
         mean    sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_mean   ess_sd  ess_bulk  ess_tail  r_hat
alpha    0.96  0.11    0.75     1.16       0.00      0.0   9813.79  9781.97   9816.05   6783.11    1.0
beta[0]  1.10  0.12    0.89     1.33       0.00      0.0   8841.92  8797.67   8856.21   7109.65    1.0
beta[1]  2.99  0.53    1.95     3.95       0.01      0.0   7878.01  7765.26   7880.25   6515.70    1.0
sigma    1.07  0.08    0.92     1.21       0.00      0.0   8651.16  8475.93   8901.69   6633.66    1.0
Case study 1: Stochastic volatility
We present a case study of stochastic volatility (time-varying stock market volatility) to illustrate PyMC3’s use on a more realistic problem. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. This example has 400+ parameters, so common sampling algorithms like Metropolis-Hastings would get bogged down, generating highly autocorrelated samples. Instead, we use NUTS, which is dramatically more efficient.
The Model
Asset prices have time-varying volatility (variance of day-over-day returns). In some periods, returns are highly variable, while in others they are very stable. Stochastic volatility models address this with a latent volatility variable, which changes over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21), written here with the parameterization used in the code below:

\[\nu \sim \text{Exponential}(0.1)\]
\[\sigma \sim \text{Exponential}(2)\]
\[s_i \sim \mathcal{N}(s_{i-1}, \sigma^2)\]
\[r_i \sim t(\nu, 0, \exp(s_i))\]

Here, \(r\) is the daily return series, which is modeled with a Student-t distribution with an unknown degrees of freedom parameter \(\nu\) and a scale parameter determined by a latent process \(s\). The individual \(s_i\) are the individual daily log volatilities in the latent log volatility process.
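Before fitting, it can help to forward-simulate this generative process in plain NumPy; the parameter values below are illustrative stand-ins, not estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma, nu = 400, 0.1, 10.0  # illustrative values

# Latent log-volatility follows a Gaussian random walk...
s = np.cumsum(sigma * rng.normal(size=n))

# ...and returns are Student-t with scale exp(s): heavy tails,
# with quiet and turbulent stretches as s drifts up and down
r = np.exp(s) * rng.standard_t(nu, size=n)
print(r.shape)  # -> (400,)
```

Plotting r from such a simulation shows the volatility clustering the model is designed to capture.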
The Data
Our data consist of daily returns of the S&P 500 stock market index since the 2008 financial crisis:
[14]:
import pandas as pd
returns = pd.read_csv(
    pm.get_data("SP500.csv"), parse_dates=True, index_col=0, usecols=["Date", "change"]
)
len(returns)
len(returns)
[14]:
2906
[15]:
import warnings
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=UserWarning)
    returns.plot(figsize=(10, 6))
    plt.ylabel("daily returns in %");
Model Specification
As with the linear regression example, specifying the model in PyMC3 mirrors its statistical specification. This model employs several new distributions: the Exponential distribution for the \(\nu\) and \(\sigma\) priors, the Student-t (StudentT) distribution for the distribution of returns, and the GaussianRandomWalk for the prior of the latent volatilities.
In PyMC3, variables with purely positive priors like Exponential are transformed with a log transform. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named "variableName_log__") is added to the model for sampling. In this model, this happens for both the degrees of freedom, nu, and the scale parameter for the volatility process, sigma, since they both have exponential priors. Variables with priors that constrain them on two sides, like Beta or Uniform, are also transformed to be unconstrained, but with a log-odds transform.
Although, unlike model specification in PyMC2, we do not typically provide starting points for variables at the model specification stage, we can provide an initial value for any distribution (called a "test value") using the testval argument. This overrides the default test value for the distribution (usually the mean, median or mode of the distribution), and is most often useful when some values are illegal and we want to ensure a legal one is selected. The test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overridden.
The vector of latent volatilities s is given a prior distribution by GaussianRandomWalk. As its name suggests, GaussianRandomWalk is a vector-valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. The scale of the innovations of the random walk, sigma, is specified in terms of the standard deviation of the normally distributed innovations and can be a scalar or a vector.
We’ll also wrap our returns in PyMC’s `Data container <https://docs.pymc.io/notebooks/data_container.html>`__. That way, when building our model, we can specify the dimension names instead of specifying the shapes of those random variables as numbers, and we will let the model infer the coordinates of those random variables. This will make more sense when you look at the model, but we encourage you to take a look at the ArviZ quickstart, which defines dimensions and coordinates more clearly and explains their big benefits.
Let’s get started on our model now:
[16]:
with pm.Model() as sp500_model:
    # The model remembers the datetime index with the name 'date'
    change_returns = pm.Data("returns", returns["change"], dims="date", export_index_as_coords=True)

    nu = pm.Exponential("nu", 1 / 10.0, testval=5.0)
    sigma = pm.Exponential("sigma", 2.0, testval=0.1)

    # We can now figure out the shape of variables based on the
    # index of the dataset
    s = pm.GaussianRandomWalk("s", sigma=sigma, dims="date")
    # instead of:
    # s = pm.GaussianRandomWalk('s', sigma, shape=len(returns))

    volatility_process = pm.Deterministic(
        "volatility_process", pm.math.exp(2 * s) ** 0.5, dims="date"
    )

    r = pm.StudentT("r", nu=nu, sigma=volatility_process, observed=change_returns, dims="date")
And we see that the model did remember the dims and coords we gave it:
[17]:
sp500_model.RV_dims
[17]:
{'returns': ('date',),
's': ('date',),
'volatility_process': ('date',),
'r': ('date',)}
[18]:
sp500_model.coords
[18]:
{'date': DatetimeIndex(['2008-05-02', '2008-05-05', '2008-05-06', '2008-05-07',
               '2008-05-08', '2008-05-09', '2008-05-12', '2008-05-13',
               '2008-05-14', '2008-05-15',
               ...
               '2019-11-01', '2019-11-04', '2019-11-05', '2019-11-06',
               '2019-11-07', '2019-11-08', '2019-11-11', '2019-11-12',
               '2019-11-13', '2019-11-14'],
              dtype='datetime64[ns]', name='Date', length=2906, freq=None)}
Notice that we transform the log volatility process s into the volatility process by exp(2 * s) ** 0.5. Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
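Note that exp(2 * s) ** 0.5 is mathematically just exp(s), since (e^{2s})^{1/2} = e^{s}; a quick NumPy check of the identity:

```python
import numpy as np

s = np.linspace(-3.0, 3.0, 13)

# (e^{2s})^{1/2} equals e^{s} elementwise
assert np.allclose(np.exp(2 * s) ** 0.5, np.exp(s))
```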
Also note that we have declared the Model name, sp500_model, in the first occurrence of the context manager, rather than splitting it into two lines as we did for the first example.
Fitting
[19]:
with sp500_model:
    trace = pm.sample(2000, init="adapt_diag", return_inferencedata=False)
Auto-assigning NUTS sampler...
Initializing NUTS using adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [s, sigma, nu]
Sampling 2 chains for 1_000 tune and 2_000 draw iterations (2_000 + 4_000 draws total) took 362 seconds.
0, dim: date, 2906 =? 2906
The rhat statistic is larger than 1.05 for some parameters. This indicates slight problems during sampling.
The estimated number of effective samples is smaller than 200 for some parameters.
We can check our samples by looking at the traceplot for nu and sigma.
[20]:
with sp500_model:
    az.plot_trace(trace, var_names=["nu", "sigma"]);
0, dim: date, 2906 =? 2906
0, dim: date, 2906 =? 2906
Finally, we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib’s plot function) so the regions where many paths overlap are shaded more darkly.
[21]:
fig, ax = plt.subplots(figsize=(15, 8))
returns.plot(ax=ax)
ax.plot(returns.index, 1 / np.exp(trace["s", ::5].T), "C3", alpha=0.03)
ax.set(title="volatility_process", xlabel="time", ylabel="volatility")
ax.legend(["S&P500", "stochastic volatility process"], loc="upper right");
/Users/CloudChaoszero/opt/anaconda3/envs/pymc3-dev-py38/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:30: UserWarning: This figure was using constrained_layout==True, but that is incompatible with subplots_adjust and/or tight_layout: setting constrained_layout==False.
  fig.subplots_adjust(bottom=0.2)
As you can see, the model correctly infers the increase in volatility during the 2008 financial crash. Moreover, note that this model is quite complex because of its high dimensionality and dependency structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease.
Case study 2: Coal mining disasters
Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period. Unfortunately, we also have a pair of years with missing data, identified as missing by a nan in the pandas Series. These missing values will be automatically imputed by PyMC3.
Next we will build a model for this series and attempt to estimate when the change occurred. At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables.
Next we will build a model for this series and attempt to estimate when the change occurred. At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables.
[22]:
import pandas as pd
# fmt: off
disaster_data = pd.Series(
    [4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
     3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
     2, 2, 3, 4, 2, 1, 3, np.nan, 2, 1, 1, 1, 1, 3, 0, 0,
     1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
     0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
     3, 3, 1, np.nan, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
     0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1]
)
# fmt: on
years = np.arange(1851, 1962)
plt.plot(years, disaster_data, "o", markersize=8, alpha=0.4)
plt.ylabel("Disaster count")
plt.xlabel("Year");
Occurrences of disasters in the time series are thought to follow a Poisson process with a large rate parameter in the early part of the time series, and a smaller rate in the later part. We are interested in locating the change point in the series, which is perhaps related to changes in mining safety regulations.
In our model,

\[D_t \sim \text{Poisson}(r_t)\]
\[r_t = \begin{cases} e, & \text{if } t \le s \\ l, & \text{if } t > s \end{cases}\]
\[s \sim \text{Uniform}(t_l, t_h)\]
\[e \sim \text{Exponential}(1)\]
\[l \sim \text{Exponential}(1)\]

the parameters are defined as follows:

* \(D_t\): The number of disasters in year \(t\)
* \(r_t\): The rate parameter of the Poisson distribution of disasters in year \(t\).
* \(s\): The year in which the rate parameter changes (the switchpoint).
* \(e\): The rate parameter before the switchpoint \(s\).
* \(l\): The rate parameter after the switchpoint \(s\).
* \(t_l\), \(t_h\): The lower and upper boundaries of year \(t\).
This model is built much like our previous models. The major differences are the introduction of discrete variables with the Poisson and discrete-uniform priors, and the novel form of the deterministic random variable rate.
[23]:
with pm.Model() as disaster_model:
    switchpoint = pm.DiscreteUniform(
        "switchpoint", lower=years.min(), upper=years.max(), testval=1900
    )

    # Priors for pre- and post-switch rates of disasters
    early_rate = pm.Exponential("early_rate", 1.0)
    late_rate = pm.Exponential("late_rate", 1.0)

    # Allocate appropriate Poisson rates to years before and after current
    rate = pm.math.switch(switchpoint >= years, early_rate, late_rate)

    disasters = pm.Poisson("disasters", rate, observed=disaster_data)
/Users/CloudChaoszero/Documents/ProjectsDev/pymc3/pymc3/model.py:1754: ImputationWarning: Data in disasters contains missing values and will be automatically imputed from the sampling distribution.
warnings.warn(impute_message, ImputationWarning)
The logic for the rate random variable,

rate = switch(switchpoint >= year, early_rate, late_rate)

is implemented using switch, a Theano function that works like an if statement. It uses the first argument to switch between the next two arguments.
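The same elementwise selection can be sketched with NumPy's np.where, which behaves like switch. A hypothetical miniature with five years and made-up rates:

```python
import numpy as np

# np.where selects elementwise between two branches, just like
# pm.math.switch: years up to and including the switchpoint get the
# early rate, later years get the late rate.
years = np.arange(1851, 1856)   # five example years
switchpoint = 1853
early_rate, late_rate = 3.0, 1.0
rate = np.where(switchpoint >= years, early_rate, late_rate)
```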
Missing values are handled transparently by passing a MaskedArray or a pandas.DataFrame with NaN values to the observed argument when creating an observed stochastic random variable. Behind the scenes, another random variable, disasters.missing_values, is created to model the missing values.
Unfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use NUTS for sampling switchpoint or the missing disaster observations. Instead, we will sample using a Metropolis step method, which implements adaptive Metropolis-Hastings, because it is designed to handle discrete values. PyMC3 automatically assigns the correct sampling algorithms.
[24]:
with disaster_model:
    trace = pm.sample(10000, return_inferencedata=False)
Multiprocess sampling (2 chains in 2 jobs)
CompoundStep
>CompoundStep
>>Metropolis: [disasters_missing]
>>Metropolis: [switchpoint]
>NUTS: [late_rate, early_rate]
Sampling 2 chains for 1_000 tune and 10_000 draw iterations (2_000 + 20_000 draws total) took 34 seconds.
The number of effective samples is smaller than 10% for some parameters.
In the trace plot below we can see that there's about a ten-year span that's plausible for a significant change in safety, but a five-year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switchpoint and the likelihood, not because of sampling error.
[25]:
with disaster_model:
    idata = az.from_pymc3(trace)
[26]:
idata
[26]:

arviz.InferenceData with four groups (posterior, log_likelihood, sample_stats, observed_data):

posterior
<xarray.Dataset>
Dimensions:                  (chain: 2, draw: 10000, disasters_missing_dim_0: 2)
Coordinates:
  * chain                    (chain) int64 0 1
  * draw                     (draw) int64 0 1 2 3 ... 9997 9998 9999
  * disasters_missing_dim_0  (disasters_missing_dim_0) int64 0 1
Data variables:
    switchpoint              (chain, draw) int64 1891 1891 1891 ... 1892 1891
    disasters_missing        (chain, draw, disasters_missing_dim_0) int64 ...
    early_rate               (chain, draw) float64 3.025 3.076 ... 3.307 3.005
    late_rate                (chain, draw) float64 0.877 0.8663 ... 0.9272
Attributes:
    created_at:                 2021-02-08T06:29:28.922616
    arviz_version:              0.11.0
    inference_library:          pymc3
    inference_library_version:  3.11.0
    sampling_time:              33.77551817893982
    tuning_steps:               1000

log_likelihood
<xarray.Dataset>
Dimensions:          (chain: 2, draw: 10000, disasters_dim_0: 111)
Coordinates:
  * chain            (chain) int64 0 1
  * draw             (draw) int64 0 1 2 ... 9998 9999
  * disasters_dim_0  (disasters_dim_0) int64 0 1 2 ... 109 110
Data variables:
    disasters        (chain, draw, disasters_dim_0) float64 ...

sample_stats
<xarray.Dataset>
Dimensions:             (chain: 2, draw: 10000, accept_dim_0: 2, accepted_dim_0: 2, scaling_dim_0: 2)
Coordinates:
  * chain               (chain) int64 0 1
  * draw                (draw) int64 0 1 2 ... 9998 9999
  * accepted_dim_0      (accepted_dim_0) int64 0 1
  * scaling_dim_0       (scaling_dim_0) int64 0 1
  * accept_dim_0        (accept_dim_0) int64 0 1
Data variables:
    perf_counter_start  (chain, draw) float64 ...
    accepted            (chain, draw, accepted_dim_0) bool ...
    diverging           (chain, draw) bool ...
    step_size           (chain, draw) float64 ...
    tree_size           (chain, draw) float64 ...
    step_size_bar       (chain, draw) float64 ...
    energy              (chain, draw) float64 ...
    depth               (chain, draw) int64 ...
    scaling             (chain, draw, scaling_dim_0) float64 ...
    process_time_diff   (chain, draw) float64 ...
    lp                  (chain, draw) float64 ...
    perf_counter_diff   (chain, draw) float64 ...
    accept              (chain, draw, accept_dim_0) float64 ...
    energy_error        (chain, draw) float64 ...
    max_energy_error    (chain, draw) float64 ...
    mean_tree_accept    (chain, draw) float64 ...

observed_data
<xarray.Dataset>
Dimensions:          (disasters_dim_0: 111)
Coordinates:
  * disasters_dim_0  (disasters_dim_0) int64 0 1 2 ... 109 110
Data variables:
    disasters        (disasters_dim_0) float64 4.0 5.0 4.0 0.0 ... 1.0 0.0 1.0
[27]:
with disaster_model:
    axes_arr = az.plot_trace(trace)
plt.draw()
for ax in axes_arr.flatten():
    if ax.get_title() == "switchpoint":
        labels = [label.get_text() for label in ax.get_xticklabels()]
        ax.set_xticklabels(labels, rotation=45, ha="right")
        break
plt.draw()
Note that the rate random variable does not appear in the trace. That is fine in this case, because it is not of interest in itself. However, if there is a deterministic random variable that one does want to see in the trace, this can be achieved by wrapping the definition of that variable in pm.Deterministic and giving it a name, as follows:
rate = pm.Deterministic("rate", pm.math.switch(switchpoint >= years, early_rate, late_rate))
For more details, see the API documentation.
The following plot shows the switch point as an orange vertical line, together with its highest posterior density (HPD) interval as a semi-transparent band. The dashed black line shows the accident rate.
[28]:
plt.figure(figsize=(10, 8))
plt.plot(years, disaster_data, ".", alpha=0.6)
plt.ylabel("Number of accidents", fontsize=16)
plt.xlabel("Year", fontsize=16)

plt.vlines(trace["switchpoint"].mean(), disaster_data.min(), disaster_data.max(), color="C1")
average_disasters = np.zeros_like(disaster_data, dtype="float")
for i, year in enumerate(years):
    idx = year < trace["switchpoint"]
    average_disasters[i] = np.mean(np.where(idx, trace["early_rate"], trace["late_rate"]))

sp_hpd = az.hdi(trace["switchpoint"])
plt.fill_betweenx(
    y=[disaster_data.min(), disaster_data.max()],
    x1=sp_hpd[0],
    x2=sp_hpd[1],
    alpha=0.5,
    color="C1",
)
plt.plot(years, average_disasters, "k--", lw=2);
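The per-year loop above can also be written with NumPy broadcasting, computing the whole (years x samples) mask in one step. A sketch with hypothetical stand-ins for the posterior samples in trace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the posterior samples in `trace`
switch_samples = rng.integers(1885, 1895, size=1000)  # switchpoint draws
early_samples = rng.normal(3.0, 0.2, size=1000)       # early_rate draws
late_samples = rng.normal(0.9, 0.1, size=1000)        # late_rate draws
years = np.arange(1851, 1962)

# Broadcasting builds an (n_years, n_samples) boolean mask in one step,
# replacing the explicit Python loop over years.
mask = years[:, None] < switch_samples[None, :]
average_disasters = np.where(mask, early_samples, late_samples).mean(axis=1)
```

This gives one posterior-mean rate per year, which is what the dashed line plots.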
Arbitrary deterministics¶
Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive, so Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python and including these functions in PyMC models. This is supported with the as_op function decorator.
Theano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types.
[29]:
import theano.tensor as tt
from theano.compile.ops import as_op


@as_op(itypes=[tt.lscalar], otypes=[tt.lscalar])
def crazy_modulo3(value):
    if value > 0:
        return value % 3
    else:
        return (value + 1) % 3


with pm.Model() as model_deterministic:
    a = pm.Poisson("a", 1)
    b = crazy_modulo3(a)
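Since as_op only wraps an ordinary Python function, the branching logic can be checked on its own before it ever touches Theano. A plain-Python version (hypothetical name crazy_modulo3_py) of the same function:

```python
# Plain-Python version of the branching logic above (no Theano), useful
# for sanity-checking the function before wrapping it with as_op.
def crazy_modulo3_py(value):
    if value > 0:
        return value % 3
    else:
        return (value + 1) % 3
```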
An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
Arbitrary distributions¶
Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC3 allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability \(\log(p(x))\). This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014).
import theano.tensor as tt

with pm.Model() as model:
    alpha = pm.Uniform('intercept', -100, 100)

    # Create custom densities
    beta = pm.DensityDist('beta', lambda value: -1.5 * tt.log(1 + value**2), testval=0)
    eps = pm.DensityDist('eps', lambda value: -tt.log(tt.abs_(value)), testval=1)

    # Create likelihood
    like = pm.Normal('y_est', mu=alpha + beta * X, sigma=eps, observed=Y)
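The two lambdas above only need to return the log-density up to an additive constant, since MCMC works with unnormalized probabilities. A NumPy sketch of the same two log-densities, handy for checking their shape before handing them to DensityDist:

```python
import numpy as np

# NumPy versions of the custom log-densities (up to additive constants).
def beta_logp(value):
    # Heavy-tailed prior on the slope: log p(beta) = -1.5 * log(1 + beta^2)
    return -1.5 * np.log(1 + value**2)

def eps_logp(value):
    # Scale-invariant prior on the noise scale: log p(eps) = -log|eps|
    return -np.log(np.abs(value))
```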
For more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for particular processes that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined Theano operators, and we must use a custom Theano operator via as_op or by inheriting from theano.Op.
Implementing the beta variable above as a Continuous subclass is shown below, along with its helper function:
[30]:
class Beta(pm.Continuous):
    def __init__(self, mu, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mu = mu
        self.mode = mu

    def logp(self, value):
        mu = self.mu
        return beta_logp(value - mu)


def beta_logp(value):
    return -1.5 * np.log(1 + (value) ** 2)


with pm.Model() as model:
    beta = Beta("slope", mu=0, testval=0)
If your logp cannot be expressed in Theano, you can decorate the function with as_op as follows: @as_op(itypes=[tt.dscalar], otypes=[tt.dscalar]). Note, however, that this will create a black-box Python function that will be much slower and will not provide the gradients necessary for e.g. NUTS.
Generalized Linear Models¶
Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.

The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example:
[31]:
# Convert X and Y to a pandas DataFrame
df = pd.DataFrame({"x1": X1, "x2": X2, "y": Y})
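Under the hood, a formula like "y ~ x1 + x2" is expanded by patsy into a design matrix with an implicit intercept column. A hypothetical sketch of that expansion (X1, X2, and Y here are simulated stand-ins, not the document's earlier data), with ordinary least squares as the non-Bayesian counterpart of the fit:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
X1 = rng.normal(size=n)                                  # hypothetical predictors
X2 = rng.normal(size=n)
Y = 1.0 + 1.0 * X1 + 2.5 * X2 + rng.normal(scale=0.5, size=n)

# "y ~ x1 + x2" expands to an intercept column plus the two predictors
design = np.column_stack([np.ones(n), X1, X2])

# Least-squares fit: the maximum-likelihood counterpart of the Bayesian GLM
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
```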
The model can then be very concisely specified in one line of code.
[32]:
from pymc3.glm import GLM
with pm.Model() as model_glm:
    GLM.from_formula("y ~ x1 + x2", df)
    trace = pm.sample()
/Users/CloudChaoszero/Documents/ProjectsDev/pymc3/pymc3/sampling.py:465: FutureWarning: In an upcoming release, pm.sample will return an `arviz.InferenceData` object instead of a `MultiTrace` by default. You can pass return_inferencedata=True or return_inferencedata=False to be safe and silence this warning.
warnings.warn(
Autoassigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sd, x2, x1, Intercept]
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 20 seconds.
The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
[33]:
from pymc3.glm.families import Binomial

df_logistic = pd.DataFrame({"x1": X1, "y": Y > np.median(Y)})

with pm.Model() as model_glm_logistic:
    GLM.from_formula("y ~ x1", df_logistic, family=Binomial())
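The Binomial family uses the logit link, so the linear predictor is mapped onto a probability through the inverse-logit (sigmoid) function. A minimal sketch of that mapping:

```python
import numpy as np

def sigmoid(eta):
    # Inverse-logit link used by the Binomial family: maps the linear
    # predictor eta onto a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-eta))
```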
For a more complete and flexible formula interface, including hierarchical GLMs, see Bambi.
Discussion¶
Probabilistic programming is an emerging paradigm in statistical learning, of which Bayesian modeling is an important sub-discipline. The signature characteristics of probabilistic programming, specifying variables as probability distributions and conditioning variables on other variables and on observations, make it a powerful tool for building models in a variety of settings and over a range of model complexity. Accompanying the rise of probabilistic programming has been a burst of innovation in fitting methods for Bayesian models that represent notable improvements over existing MCMC methods. Yet, despite this expansion, there are few software packages available that have kept pace with the methodological innovation, and still fewer that allow non-expert users to implement models.
PyMC3 provides a probabilistic programming platform for quantitative researchers to implement statistical models flexibly and succinctly. A large library of statistical distributions and several pre-defined fitting algorithms allows users to focus on the scientific problem at hand, rather than the implementation details of Bayesian modeling. The choice of Python as a development language, rather than a domain-specific language, means that PyMC3 users are able to work interactively to build models, introspect model objects, and debug or profile their work, using a dynamic, high-level programming language that is easy to learn. The modular, object-oriented design of PyMC3 means that adding new fitting algorithms or other features is straightforward. In addition, PyMC3 comes with several features not found in most other packages, most notably Hamiltonian-based samplers as well as automatic transforms of constrained random variables, which are otherwise only offered by Stan. Unlike Stan, however, PyMC3 supports discrete variables as well as non-gradient-based sampling algorithms like Metropolis-Hastings and Slice sampling.
Development of PyMC3 is an ongoing effort and several features are planned for future versions. Most notably, variational inference techniques are often more efficient than MCMC sampling, at the cost of generalizability. More recently, however, black-box variational inference algorithms have been developed, such as automatic differentiation variational inference (ADVI; Kucukelbir et al., 2017). This algorithm is slated for addition to PyMC3. As an open-source scientific computing toolkit, we encourage researchers developing new fitting algorithms for Bayesian models to provide reference implementations in PyMC3. Since samplers can be written in pure Python code, they can be implemented generally to make them work on arbitrary PyMC3 models, giving authors a larger audience to put their methods into use.
References¶
Patil, A., Huard, D., and Fonnesbeck, C.J. (2010) PyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical Software, 35(4), pp. 1–81.
Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I., Bergeron, A., Bouchard, N., Warde-Farley, D., and Bengio, Y. (2012) "Theano: new features and speed improvements". NIPS 2012 deep learning workshop.
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010) "Theano: A CPU and GPU Math Expression Compiler". Proceedings of the Python for Scientific Computing Conference (SciPy) 2010. June 30 - July 3, Austin, TX.
Lunn, D.J., Thomas, A., Best, N., and Spiegelhalter, D. (2000) WinBUGS - a Bayesian modelling framework: concepts, structure, and extensibility. Statistics and Computing, 10:325–337.
Neal, R.M. (2003) Slice sampling. Annals of Statistics, 31(3):705–741. doi:10.2307/3448413.
van Rossum, G. (2010) The Python Library Reference Release 2.6.5. URL http://docs.python.org/library/.
Duane, S., Kennedy, A.D., Pendleton, B.J., and Roweth, D. (1987) "Hybrid Monte Carlo", Physics Letters B, vol. 195, pp. 216–222.
Stan Development Team. (2014) Stan: A C++ Library for Probability and Sampling, Version 2.5.0. http://mc-stan.org.
Gamerman, D. (1997) Markov Chain Monte Carlo: Statistical Simulation for Bayesian Inference. Chapman and Hall.
Hoffman, M.D., and Gelman, A. (2014) The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research, 15(1), pp. 1593–1623.
Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., and Blei, D.M. (2017) Automatic Differentiation Variational Inference. The Journal of Machine Learning Research, 18(1), pp. 430–474. http://arxiv.org/abs/1506.03431.
Vanderplas, J. (2014) "Frequentism and Bayesianism IV: How to be a Bayesian in Python." Pythonic Perambulations, 14 Jun 2014. https://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/.
Jarrett, R.G. (1979) A note on the intervals between coal mining disasters. Biometrika, 66:191–193.
[34]:
%load_ext watermark
%watermark n u v iv w
Last updated: Sun Feb 07 2021
Python implementation: CPython
Python version : 3.8.6
IPython version : 7.20.0
theano : 1.1.2
numpy : 1.20.0
pandas : 1.2.1
arviz : 0.11.0
pymc3 : 3.11.0
matplotlib: None
Watermark: 2.1.0