# Using shared variables (Data container adaptation)¶

[1]:

%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm

import seaborn as sns
import matplotlib.pyplot as plt
import arviz as az

sns.set_context('notebook')
plt.style.use('seaborn-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))

Running on PyMC3 v3.6


## The Data class¶

The Data container class wraps the theano SharedVariable class and lets the model be aware of its inputs and outputs. This allows one to change the value of an observed variable to predict on new data or to refit the model. All variables of this class must be declared inside a model context and given a name.

Let’s see how to declare one inside a simple toy model:

[2]:

true_mu = 30
observed_data = true_mu + np.random.randn(10)

with pm.Model() as model:
    data = pm.Data('data', observed_data)
    mu = pm.Normal('mu', 0, 10)
    pm.Normal('y', mu=mu, sigma=1, observed=data)
    trace = pm.sample()

Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6621.27draws/s]
The acceptance probability does not match the target. It is 0.883289588766623, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8824465542593197, but should be close to 0.8. Try to increase the number of tuning steps.

[3]:

az.plot_trace(trace);


We can get the data container variable from the model using:

[4]:

model['data'].get_value()

[4]:

array([30.43311825, 31.02348975, 29.748085  , 31.03500693, 29.59041138,
       29.19589461, 29.52316029, 29.79871833, 30.06108278, 30.89038443])


Note that we used a theano SharedVariable method to get the value of the variable. This is because our variable is actually a SharedVariable.

[5]:

type(data)

[5]:

theano.tensor.sharedvar.TensorSharedVariable


We can plot the directed graph of our model using:

[6]:

pm.model_to_graphviz(model)

[6]:


The methods and functions related to the Data container class are:

• data_container.get_value (method inherited from the theano SharedVariable): gets the value associated with the data_container.

• data_container.set_value (method inherited from the theano SharedVariable): sets the value associated with the data_container.

• pm.set_data: PyMC3 function that sets the value of each Data container variable named in the new_data dictionary to its corresponding new value.
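The semantics of these three operations can be illustrated with a plain-Python analogy (this is not PyMC3 code; the `SharedContainer` class and `set_data` helper below are hypothetical stand-ins for the theano and PyMC3 machinery):

```python
class SharedContainer:
    """Minimal stand-in for a theano SharedVariable: one mutable value
    with get_value/set_value accessors."""

    def __init__(self, value):
        self._value = value

    def get_value(self):
        return self._value

    def set_value(self, new_value):
        self._value = new_value


def set_data(new_data, containers):
    """Analogue of pm.set_data: update each named container with its
    corresponding new value from the dictionary."""
    for name, value in new_data.items():
        containers[name].set_value(value)


# A "model" holding one named data container
containers = {'x': SharedContainer([1.0, 2.0, 3.0])}

# Swap in new data without rebuilding anything that references the container
set_data({'x': [5.0, 6.0, 9.0]}, containers)
print(containers['x'].get_value())  # -> [5.0, 6.0, 9.0]
```

The key point the analogy captures is that the container is mutated in place, so every part of the model that references it sees the new value without being rebuilt.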

[7]:

help(pm.set_data)

Help on function set_data in module pymc3.model:

set_data(new_data, model=None)
    Sets the value of one or more data container variables.

    Parameters
    ----------
    new_data : dict
        New values for the data containers. The keys of the dictionary are
        the variables names in the model and the values are the objects
        with which to update.
    model : Model (optional if in with context)

    Examples
    --------

    .. code:: ipython

        >>> import pymc3 as pm
        >>> with pm.Model() as model:
        ...     x = pm.Data('x', [1., 2., 3.])
        ...     y = pm.Data('y', [1., 2., 3.])
        ...     beta = pm.Normal('beta', 0, 1)
        ...     obs = pm.Normal('obs', x * beta, 1, observed=y)
        ...     trace = pm.sample(1000, tune=1000)

    Set the value of x to predict on new data.

    .. code:: ipython

        >>> with model:
        ...     pm.set_data({'x': [5,6,9]})
        ...     y_test = pm.sample_posterior_predictive(trace)
        >>> y_test['obs'].mean(axis=0)
        array([4.6088569 , 5.54128318, 8.32953844])



## Using Data container variables to fit the same model to several datasets¶

This and the next section are an adaptation of the notebook “Advanced usage of Theano in PyMC3” using pm.Data.

We can use Data container variables in PyMC3 to fit the same model to several datasets without the need to recreate the model each time (which can be time-consuming if the number of datasets is large):

[8]:

# We generate 10 datasets
true_mu = [np.random.randn() for _ in range(10)]
observed_data = [mu + np.random.randn(20) for mu in true_mu]

with pm.Model() as model:
    data = pm.Data('data', observed_data[0])
    mu = pm.Normal('mu', 0, 10)
    pm.Normal('y', mu=mu, sigma=1, observed=data)

# Generate one trace for each dataset
traces = []
for data_vals in observed_data:
    with model:
        # Switch out the observed dataset
        pm.set_data({'data': data_vals})
        traces.append(pm.sample())

Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6296.26draws/s]
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6494.94draws/s]
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6775.41draws/s]
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6089.71draws/s]
The acceptance probability does not match the target. It is 0.8813907103392655, but should be close to 0.8. Try to increase the number of tuning steps.
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6589.94draws/s]
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6712.69draws/s]
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6858.88draws/s]
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6772.62draws/s]
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 7330.82draws/s]
The acceptance probability does not match the target. It is 0.8863523953279285, but should be close to 0.8. Try to increase the number of tuning steps.
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 7020.24draws/s]


## Using Data container variables to predict on new data¶

We can also sometimes use Data container variables to work around limitations in the current PyMC3 API. A common task in Machine Learning is to predict values for unseen data, and one way to achieve this is to use a Data container variable for our observations:

[9]:

x = np.random.randn(100)
y = x > 0

with pm.Model() as model:
    x_shared = pm.Data('x_shared', x)
    coeff = pm.Normal('x', mu=0, sigma=1)
    logistic = pm.math.sigmoid(coeff * x_shared)
    pm.Bernoulli('obs', p=logistic, observed=y)

    # fit the model
    trace = pm.sample()

Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [x]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 6448.45draws/s]
The acceptance probability does not match the target. It is 0.8806955576016666, but should be close to 0.8. Try to increase the number of tuning steps.

[10]:

new_values = [-1, 0, 1.]
with model:
    # Switch out the observations and use sample_posterior_predictive to predict
    pm.set_data({'x_shared': new_values})
    post_pred = pm.sample_posterior_predictive(trace, samples=500)

100%|██████████| 500/500 [00:00<00:00, 1967.56it/s]


The same concept applied to a more complex model can be seen in the notebook “Variational Inference: Bayesian Neural Networks”.

## Applied example: height of toddlers as a function of age¶

This example is taken from Osvaldo Martin’s book: Bayesian Analysis with Python: Introduction to statistical modeling and probabilistic programming using PyMC3 and ArviZ, 2nd Edition.

The World Health Organization and other health institutions around the world collect data for newborns and toddlers and design growth chart standards. These charts are an essential component of the paediatric toolkit and also serve as a measure of the general well-being of populations, helping to formulate health policies, plan interventions and monitor their effectiveness.

An example of such data is the lengths (heights) of newborn / toddler girls as a function of age (in months):

[11]:

data = pd.read_csv(pm.get_data('babies.csv'))
data.plot.scatter('Month', 'Length');


To model these data we are going to use the following model:

[12]:

with pm.Model() as model_babies:
    α = pm.Normal('α', sigma=10)
    β = pm.Normal('β', sigma=10)
    γ = pm.HalfNormal('γ', sigma=10)
    δ = pm.HalfNormal('δ', sigma=10)

    month = pm.Data('month', data.Month.values)

    μ = pm.Deterministic('μ', α + β * month**0.5)
    ε = pm.Deterministic('ε', γ + δ * month)
    y_pred = pm.Normal('y_pred', mu=μ, sigma=ε, observed=data.Length)

    trace_babies = pm.sample(1000)

Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [δ, γ, β, α]
Sampling 4 chains: 100%|██████████| 6000/6000 [00:03<00:00, 1954.48draws/s]
The acceptance probability does not match the target. It is 0.8950762339991175, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8873582937209513, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8807358345108821, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8836087176127054, but should be close to 0.8. Try to increase the number of tuning steps.


The following figure shows the result of our model. The mean of $$\mu$$ is represented with a black curve, and two semitransparent orange bands represent 1 and 2 standard deviations.

[13]:

plt.plot(data.Month, data.Length, 'C0.', alpha=0.15);
μ_m = trace_babies['μ'].mean(0)
ε_m = trace_babies['ε'].mean(0)
plt.plot(data.Month, μ_m, c='k')
plt.fill_between(data.Month, μ_m + 1 * ε_m, μ_m - 1 * ε_m, alpha=0.6,
                 color='C1')
plt.fill_between(data.Month, μ_m + 2 * ε_m, μ_m - 2 * ε_m, alpha=0.4,
                 color='C1')
plt.xlabel('Month')
plt.ylabel('Length');


At the moment of writing Osvaldo’s daughter is two weeks ($$\approx 0.5$$ months) old, and thus he wonders how her length compares to the growth chart we have just created. One way to answer this question is to ask the model for the distribution of the variable length for babies of 0.5 months. Using PyMC3 we can ask this question with the function sample_posterior_predictive, as it returns samples of Length conditioned on the observed data and the estimated distribution of parameters, that is, including uncertainties. The only problem is that, by default, this function returns predictions of Length for the observed values of Month, and $$0.5$$ months (the value he cares about) has not been observed; all measurements are reported for integer months. The easiest way to get predictions for non-observed values of Month is to define a Data container variable (as part of the model) and then update its value right before sampling from the posterior predictive distribution.

[14]:

with model_babies:
    pm.set_data({'month': [0.5]})
    y_ppc = pm.sample_posterior_predictive(trace_babies, 2000)

y_ppc = y_ppc['y_pred'][:, 0]

100%|██████████| 2000/2000 [00:00<00:00, 2195.67it/s]


Now we can plot the expected distribution of lengths for 2-week-old babies and compute additional quantities, for example the percentile of a child given her length:

[15]:

ref = 51.5
density, l, u = az.plots._fast_kde(y_ppc)
x_ = np.linspace(l, u, 200)
plt.plot(x_, density)
percentile = int(sum(y_ppc <= ref) / len(y_ppc) * 100)
plt.fill_between(x_[x_ < ref], density[x_ < ref], label='Percentile = {:2d}'.format(percentile))
plt.xlabel('Length')
plt.yticks([])
plt.legend();
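The percentile computed above is simply the empirical CDF of the posterior-predictive samples evaluated at the reference length. A minimal NumPy sketch of that calculation, using a small synthetic array as a stand-in for the `y_ppc` draws (the values below are illustrative, not model output):

```python
import numpy as np

# Synthetic samples standing in for the posterior-predictive draws y_ppc
samples = np.array([48.0, 50.0, 51.0, 52.0, 53.0, 54.0, 55.0, 56.0])
ref = 51.5  # reference length in cm

# Fraction of draws at or below the reference value = empirical CDF at ref
percentile = int(np.mean(samples <= ref) * 100)
print(percentile)  # -> 37  (3 of 8 draws are <= 51.5)
```

With the real `y_ppc` array this reduces to the one-liner used in the notebook, `int(sum(y_ppc <= ref) / len(y_ppc) * 100)`.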