How does pymc represent the prior distribution and likelihood function?

If PyMC implements the Metropolis-Hastings algorithm to draw samples from the posterior density over the parameters of interest, then, in order to decide whether to move to the next state of the Markov chain, it must be able to evaluate something proportional to the posterior density at any given parameter values.

The posterior density is proportional to the likelihood function evaluated at the observed data times the prior density.
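To make that concrete, here is a minimal Metropolis sketch in plain Python. This is not PyMC's internal code; the standard-normal prior, Normal(theta, 1) likelihood, and proposal scale are all illustrative assumptions. The point is that the sampler only ever evaluates the unnormalised log posterior, i.e. log prior plus log likelihood, so the normalising constant cancels in the acceptance ratio:

```python
import math
import random

def log_prior(theta):
    # Log density of a standard normal prior, up to an additive constant
    return -0.5 * theta ** 2

def log_likelihood(theta, data):
    # i.i.d. Normal(theta, 1) observations, log density up to a constant
    return sum(-0.5 * (x - theta) ** 2 for x in data)

def log_posterior(theta, data):
    # Unnormalised log posterior: log prior + log likelihood;
    # the evidence is a constant and cancels in the Metropolis ratio
    return log_prior(theta) + log_likelihood(theta, data)

def metropolis_step(theta, data, rng, scale=0.5):
    # Symmetric random-walk proposal; accept with probability
    # min(1, posterior(proposal) / posterior(current))
    proposal = theta + rng.gauss(0.0, scale)
    log_accept = log_posterior(proposal, data) - log_posterior(theta, data)
    if log_accept >= 0 or math.log(rng.random()) < log_accept:
        return proposal
    return theta

data = [0.9, 1.1, 1.0]
rng = random.Random(0)
theta = 0.0
samples = []
for _ in range(1000):
    theta = metropolis_step(theta, data, rng)
    samples.append(theta)
```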

How is each of these represented within PyMC? How does it compute each of these quantities from the model object?

I wonder if anyone can give me a high-level description of the approach, or point me to where I can find it.

Michal answered 23/5, 2013 at 20:20 Comment(2)
Taking into account that nobody seems to be able to answer you, I suggest asking here: github.com/pymc-devs/pymc/issuesCamillacamille
This seems like a job for the source. It's relatively short, and with your apparent understanding of the algorithm, perhaps a quick look will be more illuminating for you than it was for me.Lemuellemuela

To represent the prior, you need an instance of the Stochastic class, which has two primary attributes:

value : the variable's current value
logp : the log probability of the variable's current value given the values of its parents

You can initialize a prior with the name of the distribution you are using.

To represent the likelihood, you need a so-called data stochastic: an instance of the Stochastic class whose observed flag is set to True. The value of such a variable cannot be changed and it will not be sampled. Again, you can initialize the likelihood with the name of the distribution you are using (but don't forget to set the observed flag to True).
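The division of labour above can be sketched with a toy stand-in written in plain Python. This is an illustration of the idea, not PyMC's actual implementation: each variable carries a current value and can report the log density of that value, an observed variable's value is fixed data, and the sampler's target is the sum of all the logp terms:

```python
import math

class Stochastic:
    """Toy stand-in for PyMC's Stochastic (illustration only).

    Holds a current `value` and evaluates `logp`, the log density of
    that value given the current values of its parents."""

    def __init__(self, logp_fn, value, observed=False):
        self.logp_fn = logp_fn
        self.value = value
        self.observed = observed  # observed => value is fixed data

    @property
    def logp(self):
        return self.logp_fn(self.value)

# Prior: Normal(0, 1) on mu; a sampler would update mu.value.
mu = Stochastic(lambda v: -0.5 * v ** 2 - 0.5 * math.log(2 * math.pi),
                value=0.0)

# "Data stochastic": observed values whose log density depends on mu.
def data_logp(values):
    return sum(-0.5 * (x - mu.value) ** 2 - 0.5 * math.log(2 * math.pi)
               for x in values)

y = Stochastic(data_logp, value=[1.2, 0.8], observed=True)

# The joint (unnormalised) log posterior the sampler evaluates:
joint_logp = mu.logp + y.logp
```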

Say we have the following setup:

import pymc as pm
import numpy as np
import theano.tensor as t

x = np.array([1,2,3,4,5,6])
y = np.array([0,1,0,1,1,1])

We can run a simple logistic regression with the following:

with pm.Model() as model:
    # Priors
    b0 = pm.Normal("b0", mu=0, tau=1e-6)
    b1 = pm.Normal("b1", mu=0, tau=1e-6)
    # Likelihood
    z = b0 + b1 * x
    yhat = pm.Bernoulli("yhat", p=1 / (1 + t.exp(-z)), observed=y)
    # Sample from the posterior
    trace = pm.sample(10000, step=pm.Metropolis())

Most of the above comes from Chris Fonnesbeck's IPython notebook here.
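To see what the sampler actually evaluates for a model like this, here is a hand-rolled version of the unnormalised log posterior in plain Python. It is a sketch, not PyMC's internal code; it assumes the precision (tau) parameterisation of the Normal that the model above uses:

```python
import math

x = [1, 2, 3, 4, 5, 6]
y = [0, 1, 0, 1, 1, 1]

def normal_logp(v, mu, tau):
    # Log density of Normal(mu, tau) in the precision parameterisation
    return 0.5 * math.log(tau / (2 * math.pi)) - 0.5 * tau * (v - mu) ** 2

def bernoulli_logp(obs, p):
    # Log probability mass of a single Bernoulli observation
    return math.log(p) if obs else math.log(1 - p)

def model_logp(b0, b1):
    # Log prior for both coefficients...
    lp = normal_logp(b0, 0.0, 1e-6) + normal_logp(b1, 0.0, 1e-6)
    # ...plus the log likelihood of the observed data
    for xi, yi in zip(x, y):
        p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
        lp += bernoulli_logp(yi, p)
    return lp
```

A Metropolis step for (b0, b1) only ever needs differences of `model_logp` between the current and proposed points.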

Gad answered 28/3, 2014 at 3:19 Comment(3)
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes.Irritated
Ben, you are mixing PyMC2 and PyMC3 notation. observed=True was for PyMC2, where one also had to give the data in value. PyMC3 joins these two keywords: the data are given directly with observed=y, which implies observed=True.Schell
what if b0 is a vector and all its elements are independent. i.e. the prior can be written as product of densities? How do you write this in pymc?Bourges

© 2022 - 2024 — McMap. All rights reserved.