Marginalized Gaussian Mixture Model

Author: Austin Rochford

[1]:
import arviz as az
import numpy as np
import pymc3 as pm
import seaborn as sns

from matplotlib import pyplot as plt

print(f"Running on PyMC3 v{pm.__version__}")
Running on PyMC3 v3.9.3
[2]:
%config InlineBackend.figure_format = 'retina'
RANDOM_SEED = 8927
np.random.seed(RANDOM_SEED)
az.style.use("arviz-darkgrid")

Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.

[3]:
N = 1000

# True mixture weights, component means, and component standard deviations
# used to simulate the data
W = np.array([0.35, 0.4, 0.25])

MU = np.array([0.0, 2.0, 5.0])
SIGMA = np.array([0.5, 0.5, 1.0])
[4]:
# Assign each observation to a component, then draw from that component
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
[5]:
fig, ax = plt.subplots(figsize=(8, 6))

ax.hist(x, bins=30, density=True, lw=0);
[Figure: histogram of the simulated mixture data]

A natural parameterization of the Gaussian mixture model is as the latent variable model

\[\begin{split}\begin{align*} \mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\ \tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\ \boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\ z\ |\ \boldsymbol{w} & \sim \textrm{Cat}(\boldsymbol{w}) \\ x\ |\ z & \sim N(\mu_z, \tau^{-1}_z). \end{align*}\end{split}\]

An implementation of this parameterization in PyMC3 is available here. A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable \(z\), which can cause slow mixing and ineffective exploration of the tails of the distribution.
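
For reference, a minimal sketch of this latent-variable parameterization in PyMC3 might look as follows (not run here; PyMC3 would assign a non-gradient step method such as CategoricalGibbsMetropolis to the discrete \(z\), which is precisely the source of the slow mixing described above):

with pm.Model() as latent_model:
    w = pm.Dirichlet("w", np.ones_like(W))

    mu = pm.Normal("mu", 0.0, 10.0, shape=W.size)
    tau = pm.Gamma("tau", 1.0, 1.0, shape=W.size)

    # Discrete component assignment for every observation
    z = pm.Categorical("z", w, shape=N)
    # Each observation is drawn from its assigned component
    x_obs = pm.Normal("x_obs", mu[z], tau=tau[z], observed=x)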

An alternative, equivalent parameterization that addresses these problems is to marginalize over \(z\). The marginalized model is

\[\begin{split}\begin{align*} \mu_1, \ldots, \mu_K & \sim N(0, \sigma^2) \\ \tau_1, \ldots, \tau_K & \sim \textrm{Gamma}(a, b) \\ \boldsymbol{w} & \sim \textrm{Dir}(\boldsymbol{\alpha}) \\ f(x\ |\ \boldsymbol{w}) & = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i), \end{align*}\end{split}\]

where

\[N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)\]

is the probability density function of the normal distribution.

Marginalizing \(z\) out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the Stan community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the Stan User’s Guide and Reference Manual.
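
To make the marginalization concrete, the mixture log-density can be evaluated with a log-sum-exp over components. The following NumPy/SciPy sketch computes it at the true simulation parameters; it is purely illustrative, and is essentially the computation that NormalMixture performs internally:

from scipy.special import logsumexp
from scipy.stats import norm

# log f(x | w, mu, sigma) = logsumexp_i [ log w_i + log N(x | mu_i, sigma_i) ]
component_logp = np.log(W) + norm.logpdf(x[:, None], MU, SIGMA)  # shape (N, K)
marginal_logp = logsumexp(component_logp, axis=1)  # one term per observation
total_loglike = marginal_logp.sum()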

PyMC3 supports marginalized Gaussian mixture models through its NormalMixture class. (It also supports marginalized general mixture models through its Mixture class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.

[6]:
with pm.Model() as model:
    # Symmetric Dirichlet prior on the mixture weights
    w = pm.Dirichlet("w", np.ones_like(W))

    # Priors on the component means and precisions
    mu = pm.Normal("mu", 0.0, 10.0, shape=W.size)
    tau = pm.Gamma("tau", 1.0, 1.0, shape=W.size)

    # Marginalized mixture likelihood
    x_obs = pm.NormalMixture("x_obs", w, mu, tau=tau, observed=x)
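
For comparison, the same model could be expressed with the more general Mixture class; the following is a sketch under the same priors (not run here):

with pm.Model() as general_model:
    w = pm.Dirichlet("w", np.ones_like(W))

    mu = pm.Normal("mu", 0.0, 10.0, shape=W.size)
    tau = pm.Gamma("tau", 1.0, 1.0, shape=W.size)

    # Mixture accepts an arbitrary component distribution; with Normal
    # components it is equivalent to the NormalMixture model above
    x_obs = pm.Mixture("x_obs", w, pm.Normal.dist(mu, tau=tau), observed=x)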
[7]:
with model:
    trace = pm.sample(5000, n_init=10000, tune=1000)

    # sample posterior predictive samples
    ppc_trace = pm.sample_posterior_predictive(trace, var_names=["x_obs"])

    # Get an arviz inference object
    idata_pymc3 = az.from_pymc3(trace, posterior_predictive=ppc_trace)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [tau, mu, w]
100.00% [12000/12000 00:34<00:00 Sampling 2 chains, 0 divergences]
Sampling 2 chains for 1_000 tune and 5_000 draw iterations (2_000 + 10_000 draws total) took 35 seconds.
The rhat statistic is larger than 1.4 for some parameters. The sampler did not converge.
The estimated number of effective samples is smaller than 200 for some parameters.
100.00% [10000/10000 06:44<00:00]

We see in the following plots that the posterior distributions of the weights and the component means have captured the true values quite well.

[9]:
az.plot_trace(idata_pymc3, var_names=["w", "mu"]);
[Figure: trace plots of w and mu]
[10]:
az.plot_posterior(idata_pymc3, var_names=["w", "mu"]);
[Figure: posterior plots of w and mu]

We see that the posterior predictive samples have a distribution quite close to that of the observed data.

[11]:
az.plot_ppc(idata_pymc3)
[11]:
array([<matplotlib.axes._subplots.AxesSubplot object at 0x7fc7b7eb3dc0>],
      dtype=object)
[Figure: posterior predictive check of x_obs against the observed data]
[12]:
%load_ext watermark
%watermark -n -u -v -iv -w
seaborn 0.10.1
numpy   1.18.5
arviz   0.9.0
pymc3   3.9.3
last updated: Fri Sep 11 2020

CPython 3.8.3
IPython 7.16.1
watermark 2.0.2