EDIT
I have not been able to reproduce the results I originally posted. When I got those results the first time, I closed R, restarted it, and ran the whole thing again just to make sure, and I got the same results. What appears below is copied exactly from my R console. However, I have since tried the code a third (and fourth and fifth) time and it no longer works. I'm leaving my original answer up in case I was onto something without realizing it and it's of use to someone else, but the advice below does not appear to work (at least not consistently).
The problem does seem to lie in the C code. When I open up the ideal function and run it line by line, all.equal returns TRUE for every input in this line of code:
output <- .C("IDEAL", PACKAGE = .package.Name, as.integer(n),
as.integer(m), as.integer(d), as.double(yToC), as.integer(maxiter),
as.integer(thin), as.integer(impute), as.integer(mda),
as.double(xp), as.double(xpv), as.double(bp), as.double(bpv),
as.double(xstart), as.double(bstart), xoutput = as.double(rep(0,
n * d * numrec)), boutput = as.double(0), as.integer(burnin),
as.integer(usefile), as.integer(store.item), as.character(file),
as.integer(verbose))
However, when I run the above code multiple times, output$xoutput returns slightly different results each time, even if I call set.seed(42) immediately before each run.
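The pattern of that check can be sketched with a self-contained base-R example (using rnorm as a stand-in for the .C("IDEAL", ...) call above, whose arguments depend on objects inside ideal):

```r
# Minimal pattern for checking whether a sampler respects set.seed():
# seed immediately before each draw, then compare the two runs.
draw <- function() {
  set.seed(42)
  rnorm(5)  # stand-in for the .C("IDEAL", ...) call
}

isTRUE(all.equal(draw(), draw()))  # TRUE, because rnorm uses R's seeded RNG
```

The same comparison applied to output$xoutput gave me small mean relative differences between runs, which would suggest the C code is not (consistently) drawing from R's seeded RNG state.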
sessionInfo()
R version 2.15.2 (2012-10-26)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] grid splines stats graphics grDevices utils datasets methods base
other attached packages:
[1] SoDA_1.0-5 pscl_1.04.4 vcd_1.2-13 colorspace_1.2-0 gam_1.06.2 coda_0.16-1 lattice_0.20-10 mvtnorm_0.9-9994
[9] MASS_7.3-22
loaded via a namespace (and not attached):
[1] tools_2.15.2
ORIGINAL ANSWER
The ideal function has a startvals argument, which defaults to "eigen". For your call to set.seed to have an effect, you need to change that argument to "random". Here's what you already tried:
run1 <- simulationResult(
ideal(s109,
normalize=TRUE,
maxiter = 500,
thin = 10,
burnin = 0,
startvals = "eigen"),
seed = 42)
run2 <- simulationResult(
ideal(s109,
normalize=TRUE,
maxiter = 500,
thin = 10,
burnin = 0,
startvals = "eigen"),
seed = 42)
all.equal(run1@firstState, run2@firstState)
[1] TRUE
all.equal(run1@result$xbar, run2@result$xbar)
[1] "Mean relative difference: 0.01832379"
And here's the same thing with startvals set to "random":
run1 <- simulationResult(
ideal(s109,
normalize=TRUE,
maxiter = 500,
thin = 10,
burnin = 0,
startvals = "random"),
seed = 42)
run2 <- simulationResult(
ideal(s109,
normalize=TRUE,
maxiter = 500,
thin = 10,
burnin = 0,
startvals = "random"),
seed = 42)
all.equal(run1@firstState, run2@firstState)
[1] TRUE
all.equal(run1@result$xbar, run2@result$xbar)
[1] TRUE
As far as I can see, the need to set startvals to "random" in order to get replicable results is not clearly indicated in the package documentation. I had to play around with it for a while before I figured it out.
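If you don't want to depend on SoDA's simulationResult wrapper, the same replication check can be sketched with plain set.seed calls. This is a sketch under the assumption that pscl is installed and the s109 data loads as in your question; given the EDIT above, treat the expected TRUE with caution:

```r
library(pscl)
data(s109)

# Seed immediately before each run; startvals = "random" is what
# makes the starting values depend on the seed.
set.seed(42)
fit1 <- ideal(s109, normalize = TRUE, maxiter = 500, thin = 10,
              burnin = 0, startvals = "random")

set.seed(42)
fit2 <- ideal(s109, normalize = TRUE, maxiter = 500, thin = 10,
              burnin = 0, startvals = "random")

all.equal(fit1$xbar, fit2$xbar)  # TRUE if the runs really replicate
```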