Fit a Bayesian linear regression and predict unobservable values
I'd like to use JAGS from R to fit a linear model to observable quantities and then make inference about unobservable ones. I found plenty of examples on the internet about how to fit the model, but nothing on how to use its coefficients to predict new values once the model has been fitted in the JAGS environment. I'd appreciate any help on this.

My data looks like the following:

ngroups <- 2
group <- 1:ngroups
nobs <- 100
# two groups of 100 observations each: a response y and a covariate x
dta <- data.frame(group = rep(group, each = nobs),
                  y = rnorm(nobs * ngroups),
                  x = runif(nobs * ngroups))
head(dta)
Jimjams asked 12/11, 2015 at 1:31
Do you really want an informative prior? If not, just use the lm and predict functions. – Septuplet
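
For reference, the frequentist shortcut the comment suggests is short; a minimal sketch using the simulated dta from the question (ignoring group for simplicity):

fit <- lm(y ~ x, data=dta)
# point predictions with 95% prediction intervals at two hypothetical x-values
predict(fit, newdata=data.frame(x=c(0.25, 0.75)), interval="prediction")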

JAGS has powerful ways to make inference about missing data, and once you get the hang of it, it's easy! I strongly recommend Marc Kéry's excellent book, which provides a wonderful introduction to programming in the BUGS language (JAGS is close enough to BUGS that almost everything transfers).

The easiest way to do this involves, as you say, fitting the model. Below I provide a complete worked example of how that works. But you seem to be asking for a way to get the prediction interval without re-running the model (is your model very large and computationally expensive?). That can also be done.
How to predict--the hard way (without re-running the model)

For each iteration of the MCMC, simulate a response for the desired x-value using that iteration's posterior draws of the parameters. So imagine you want to predict a value for X=10. If iteration 1 (post burn-in) has slope=2, intercept=1, and standard deviation=0.5, draw a Y-value from

Y <- rnorm(1, 1 + 2*10, 0.5)  # mean = intercept + slope*X

And repeat for iteration 2, 3, 4, 5... These will be your posterior draws for the response at X=10. Note: if you did not monitor the standard deviation in your JAGS model, you are out of luck and need to fit the model again.
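
In practice you can vectorize this over all the saved draws at once. Here is a minimal sketch, assuming post is a matrix with one row per posterior draw and columns named int, slope, and sigma (e.g. post <- as.matrix(my.model_samples), using the monitors from the worked example below):

# posterior predictive draws at X = 10, one per saved MCMC iteration
x.new <- 10
y.draws <- rnorm(nrow(post),
                 mean = post[, "int"] + post[, "slope"] * x.new,
                 sd   = post[, "sigma"])
quantile(y.draws, probs=c(0.025, 0.5, 0.975))  # median and 95% prediction interval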

How to predict--the easy way (with a worked example)

The basic idea is to insert into your data the x-values whose responses you want to predict, with the associated y-values set to NA. For example, if you want a prediction interval for X=10, just include the point (10, NA) in your data and set a trace monitor for the y-value.

I use JAGS from R with the rjags package. Below is a complete worked example that begins by simulating the data, then adds some extra x-values to the data, specifies and runs the linear model in JAGS via rjags, and summarizes the results. Y[101:105] contain the posterior predictive draws for X[101:105], from which the prediction intervals come. Notice that Y[1:100] just contain the observed y-values for X[1:100]; these are the data we fed to the model, and they never change as the model updates.

library(rjags)
# Simulate data (100 observations)
my.data <- as.data.frame(matrix(data=NA, nrow=100, ncol=2))
names(my.data) <- c("X", "Y")
# the linear model will predict Y based on the covariate X

my.data$X <- runif(100) # values for the covariate
int <- 2     # specify the true intercept
slope <- 1   # specify the true slope
sigma <- .5   # specify the true residual standard deviation
my.data$Y <- rnorm(100, slope*my.data$X+int, sigma)  # Simulate the data

#### Extra data for prediction of unknown Y-values from known X-values
y.predict <- as.data.frame(matrix(data=NA, nrow=5, ncol=2))
names(y.predict) <- c("X", "Y")
y.predict$X <- c(-1, 0, 1.3, 2, 7)

mydata <- rbind(my.data, y.predict)


set.seed(333)
setwd("INSERT YOUR WORKING DIRECTORY HERE")  # mymodel.txt is written here
sink("mymodel.txt")
cat("model{

    # Priors

    int ~ dnorm(0, .001)
    slope ~ dnorm(0, .001)
    tau <- 1/(sigma * sigma)
    sigma ~ dunif(0,10) 

    # Model structure

    for(i in 1:R){
    Y[i] ~ dnorm(m[i],tau)
    m[i] <- int + slope * X[i]
    }
    }", fill=TRUE)
sink()
jags.data <- list(R=nrow(mydata), X=mydata$X, Y=mydata$Y)

inits <- function(){list(int=rnorm(1, 0, 5), slope=rnorm(1,0,5),
                         sigma=runif(1,0,10))}

params <- c("Y", "int", "slope", "sigma")

nc <- 3           # number of chains
n.adapt <- 1000   # adaptation iterations
n.burn <- 1000    # burn-in iterations
n.iter <- 10000   # sampling iterations per chain
thin <- 10        # thinning interval
my.model <- jags.model('mymodel.txt', data = jags.data, inits=inits, n.chains=nc, n.adapt=n.adapt)
update(my.model, n.burn)
my.model_samples <- coda.samples(my.model,params,n.iter=n.iter, thin=thin)
summary(my.model_samples)
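
If you only want the prediction intervals rather than the full summary, one way to pull them out of the coda output is sketched below (as.matrix stacks the chains; the Y[101] through Y[105] column names come from the monitors set above):

post <- as.matrix(my.model_samples)  # one row per posterior draw, chains stacked
# median and 95% posterior prediction interval for each of the five new x-values
apply(post[, paste0("Y[", 101:105, "]")], 2,
      quantile, probs=c(0.025, 0.5, 0.975))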
Morelos answered 15/11, 2015 at 17:3
What if you have half a million observations to predict? Is there a way to do this "the easy way" without it taking a year? – Pollywog
You'll be better off doing it "the hard way", which isn't all that hard. If you're using R, you can vectorize the prediction over the half million points, which will be very fast, and wrap that inside a loop that iterates over the posterior draws of the MCMC chain. – Morelos
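
To make that concrete, here is a rough sketch of the vectorized approach (hypothetical names; it assumes post is the stacked draws matrix from above and processes the new x-values in chunks so the draws-by-points matrix stays small enough for memory):

post <- as.matrix(my.model_samples)   # one row per posterior draw
x.new <- runif(5e5)                   # half a million new covariate values
chunk <- 1000                         # points processed per pass
ci <- matrix(NA_real_, nrow=length(x.new), ncol=2)
for (start in seq(1, length(x.new), by=chunk)) {
    idx <- start:min(start + chunk - 1, length(x.new))
    # draws-by-points matrix of means: int + slope * x
    mu <- post[, "int"] + outer(post[, "slope"], x.new[idx])
    # simulate responses; the sigma vector recycles down each column (one per draw)
    y <- matrix(rnorm(length(mu), mu, post[, "sigma"]), nrow=nrow(post))
    # 95% prediction interval for each point in this chunk
    ci[idx, ] <- t(apply(y, 2, quantile, probs=c(0.025, 0.975)))
}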
