Modified BPMF in PyMC3 using `LKJCorr` priors: PositiveDefiniteError using `NUTS`

I previously implemented the original Bayesian Probabilistic Matrix Factorization (BPMF) model in pymc3. See my previous question for reference, data source, and problem setup. Per the answer to that question from @twiecki, I've implemented a variation of the model using LKJCorr priors for the correlation matrices and uniform priors for the standard deviations. In the original model, the covariance matrices are drawn from Wishart distributions, but due to current limitations of pymc3, the Wishart distribution cannot be sampled from properly. This answer to a loosely related question provides a succinct explanation for the choice of LKJCorr priors. The new model is below.

import logging

import pymc3 as pm
import numpy as np
import theano.tensor as t


n, m = train.shape
dim = 10  # dimensionality
beta_0 = 1  # scaling factor for lambdas; unclear on its use
alpha = 2  # fixed precision for likelihood function
std = .05  # how much noise to use for model initialization

# We will use separate priors for sigma and correlation matrix.
# In order to convert the upper triangular correlation values to a
# complete correlation matrix, we need to construct an index matrix:
n_elem = dim * (dim - 1) // 2  # integer division; np.arange needs an int
tri_index = np.zeros([dim, dim], dtype=int)
tri_index[np.triu_indices(dim, k=1)] = np.arange(n_elem)
tri_index[np.triu_indices(dim, k=1)[::-1]] = np.arange(n_elem)
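# For example, with dim = 3 the correlation vector has 3 entries and
# tri_index works out to
#     [[0, 0, 1],
#      [0, 0, 2],
#      [1, 2, 0]]
# so indexing the LKJCorr vector with tri_index yields a symmetric matrix;
# its diagonal (which wrongly points at element 0) is overwritten with ones
# via fill_diagonal inside the model below.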

logging.info('building the BPMF model')
with pm.Model() as bpmf:
    # Specify user feature matrix
    sigma_u = pm.Uniform('sigma_u', shape=dim)
    corr_triangle_u = pm.LKJCorr(
        'corr_u', n=1, p=dim,
        testval=np.random.randn(n_elem) * std)

    corr_matrix_u = corr_triangle_u[tri_index]
    corr_matrix_u = t.fill_diagonal(corr_matrix_u, 1)
    cov_matrix_u = t.diag(sigma_u).dot(corr_matrix_u.dot(t.diag(sigma_u)))
    lambda_u = t.nlinalg.matrix_inverse(cov_matrix_u)

    mu_u = pm.Normal(
        'mu_u', mu=0, tau=beta_0 * lambda_u, shape=dim,
        testval=np.random.randn(dim) * std)
    U = pm.MvNormal(
        'U', mu=mu_u, tau=lambda_u,
        shape=(n, dim), testval=np.random.randn(n, dim) * std)

    # Specify item feature matrix
    sigma_v = pm.Uniform('sigma_v', shape=dim)
    corr_triangle_v = pm.LKJCorr(
        'corr_v', n=1, p=dim,
        testval=np.random.randn(n_elem) * std)

    corr_matrix_v = corr_triangle_v[tri_index]
    corr_matrix_v = t.fill_diagonal(corr_matrix_v, 1)
    cov_matrix_v = t.diag(sigma_v).dot(corr_matrix_v.dot(t.diag(sigma_v)))
    lambda_v = t.nlinalg.matrix_inverse(cov_matrix_v)

    mu_v = pm.Normal(
        'mu_v', mu=0, tau=beta_0 * lambda_v, shape=dim,
        testval=np.random.randn(dim) * std)
    V = pm.MvNormal(
        'V', mu=mu_v, tau=lambda_v,
        shape=(m, dim), testval=np.random.randn(m, dim) * std)

    # Specify rating likelihood function
    R = pm.Normal(
        'R', mu=t.dot(U, V.T), tau=alpha * np.ones((n, m)),
        observed=train)

# `start` is the start dictionary obtained from running find_MAP for PMF.
# See the previous post for PMF code.
for key in bpmf.test_point:
    if key not in start:
        start[key] = bpmf.test_point[key]

with bpmf:
    step = pm.NUTS(scaling=start)

The goal with this reimplementation was to produce a model that could be estimated using the NUTS sampler. Unfortunately, I'm still getting the same error at the last line:

PositiveDefiniteError: Scaling is not positive definite. Simple check failed. Diagonal contains negatives. Check indexes [   0    1    2    3    ...   1030 1031 1032 1033 1034   ]

I've made all the code for PMF, BPMF, and this modified BPMF available in this gist to make it simple to replicate the error. All you need to do is download the data (also referenced in the gist).

Assentation asked 20/4, 2015 at 22:48

It looks like you are passing the complete precision matrix into the normal distribution:

mu_u = pm.Normal(
    'mu_u', mu=0, tau=beta_0 * lambda_u, shape=dim,
    testval=np.random.randn(dim) * std)

I assume you only want to pass the diagonal values:

mu_u = pm.Normal(
    'mu_u', mu=0, tau=beta_0 * t.diag(lambda_u), shape=dim,
    testval=np.random.randn(dim) * std)

Does this change to mu_u and mu_v fix it for you?
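
To see why the full matrix is a problem, here is a minimal NumPy sketch (an illustration of the idea, not code from the model): the precision matrix of a perfectly valid covariance matrix generally has negative off-diagonal entries, so broadcasting the whole matrix as tau for a univariate Normal presumably pushes those negatives into the element-wise precisions that NUTS builds its scaling from, whereas the diagonal alone stays positive.

import numpy as np

cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])   # a valid (positive definite) covariance
lam = np.linalg.inv(cov)       # the corresponding precision matrix
print(lam)
# approximately [[ 2.78, -2.22],
#                [-2.22,  2.78]]  -- off-diagonal precisions are negative
print(np.diag(lam))
# approximately [2.78, 2.78]      -- only the positive diagonal entries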

Bahadur answered 21/4, 2015 at 8:46
Yes, that resolves the issue. I've updated the gist accordingly. – Assentation
