Bootstrapping to compare two groups

In the following code I use bootstrapping to calculate the confidence interval and the p-value under the null hypothesis that two different fertilizers applied to tomato plants have no effect on plant yields (the alternative being that the "improved" fertilizer is better). The first random sample (x) comes from plants treated with a standard fertilizer, while the second sample (y) comes from plants treated with the "improved" one.

x <- c(11.4, 25.3, 29.9, 16.5, 21.1)
y <- c(23.7, 26.6, 28.5, 14.2, 17.9, 24.3)
total <- c(x, y)
library(boot)
# statistic: difference in means, with positions 6-11 of each resample treated as y
# and positions 1-5 as x
diff <- function(x, i) mean(x[i[6:11]]) - mean(x[i[1:5]])
b <- boot(total, diff, R = 10000)

ci <- boot.ci(b)
p.value <- sum(b$t >= b$t0) / b$R

What I don't like about the code above is that resampling is done as if there were only one sample of 11 values (the first 5 positions of each resample are treated as sample x and the remaining 6 as sample y). Could you show me how this code should be modified so that it draws resamples of size 5 with replacement from the first sample and separate resamples of size 6 from the second sample, so that the bootstrap resampling mimics the "separate samples" design that produced the original data?

Hierarch answered 1/9, 2010 at 7:6 Comment(0)

EDIT 2: Hack deleted, as it was a wrong solution. Instead, one has to use the strata argument of the boot function:

total <- c(x,y)
id <- as.factor(c(rep("x",length(x)),rep("y",length(y))))
b <- boot(total, diff, strata=id, R = 10000)
...
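For completeness, here is a minimal end-to-end sketch of the stratified version (a sketch, not part of the original answer, reusing x, y and the diff statistic from the question; with strata = id, positions 1-5 of each resample are drawn from x and positions 6-11 from y):

library(boot)

x <- c(11.4, 25.3, 29.9, 16.5, 21.1)
y <- c(23.7, 26.6, 28.5, 14.2, 17.9, 24.3)

total <- c(x, y)
id <- as.factor(c(rep("x", length(x)), rep("y", length(y))))

# difference in group means; the strata ensure i[1:5] index x and i[6:11] index y
diff <- function(d, i) mean(d[i[6:11]]) - mean(d[i[1:5]])

b <- boot(total, diff, strata = id, R = 10000)
boot.ci(b, type = "perc")   # percentile CI for the difference in means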

Be aware that you're not going to get even close to a correct estimate of your p-value this way:

x <- c(1.4, 2.3, 2.9, 1.5, 1.1)   # now every value of y is well above every value of x
y <- c(23.7, 26.6, 28.5, 14.2, 17.9, 24.3)

total <- c(x, y)

b <- boot(total, diff, strata = id, R = 10000)   # id and diff as defined above
ci <- boot.ci(b)
p.value <- sum(b$t >= b$t0) / b$R
> p.value
[1] 0.5162

How would you explain a p-value of 0.51 for two samples where all values of the second are higher than the highest value of the first?

The code above is fine for getting a (biased) estimate of the confidence interval, but significance testing of the difference should be done by permutation over the complete dataset.
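For example, such a permutation test might look like this (a sketch, not part of the original answer; the group labels are reshuffled and the observed difference in means is compared against the permutation distribution):

set.seed(1)   # hypothetical seed, just for reproducibility

x <- c(11.4, 25.3, 29.9, 16.5, 21.1)
y <- c(23.7, 26.6, 28.5, 14.2, 17.9, 24.3)
total <- c(x, y)

obs.diff <- mean(y) - mean(x)   # observed difference in means

R <- 10000
perm.diff <- replicate(R, {
  # reshuffle the labels: any 6 of the 11 pooled values play the role of y
  idx <- sample(length(total), length(y))
  mean(total[idx]) - mean(total[-idx])
})

# one-sided p-value: how often a random labelling is at least as extreme
p.value <- mean(perm.diff >= obs.diff)
p.value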

Denunciation answered 1/9, 2010 at 7:30 Comment(7)
Thank you! Regarding your question "Why would I want to do that", check out the box entitled "BOOTSTRAP FOR COMPARING TWO POPULATIONS" (and, if you want, comment on it) at the bottom of page 18 here: bcs.whfreeman.com/ips5e/content/cat_080/pdf/moore14.pdf – Hierarch
My main problem was how diff.calc should be defined. And still I'm surprised not to see the second argument i inside it! – Hierarch
@gd047 : I guessed something like that already from your question on statexchange. Note they speak solely of a confidence interval and say nothing about a p-value there. My example shows you why. – Denunciation
@gd047 : It's a hack: I have to specify i as a function argument to be able to use the boot function, but I do the resampling within the function. So the only thing the boot function does is loop and return a boot object. In fact, I just remembered there's a strata argument to do just that. Solution edited. – Denunciation
@Joris Meys: I am also going to use it only for the confidence interval (not the p-value). You said it's a biased C.I. estimate. What would you suggest for an unbiased one? P.S. Your edited version seems OK, but the original "hack" produced an incorrect "original" value (b$t0), different each time the boot was run. – Hierarch
@gd047 : I realized. That's because it also uses the sampling to determine the original value. I'm going to delete the hack. I assumed you'd use the p-value as you calculate it in your original code as well, so I'll just leave the comment about it for other people. – Denunciation
@gd047 : Getting an unbiased estimate is rather impossible on small datasets. It's not a bad thing, but one should be aware of the fact that the estimate is biased. – Denunciation

Following John, I think the appropriate way to use the bootstrap to test whether the sums of these two different populations are significantly different is as follows:

x <- c(1.4,2.3,2.9,1.5,1.1)
y <- c(23.7,26.6,28.5,14.2,17.9,24.3)


library(boot)

# boot statistics must take the data and a vector of resample indices,
# so wrap sum instead of passing it bare
sum_stat <- function(d, i) sum(d[i])

b_x <- boot(x, sum_stat, R = 10000)
b_y <- boot(y, sum_stat, R = 10000)

# approximate z statistic from the two bootstrap variances
z <- (b_x$t0 - b_y$t0) / sqrt(var(b_x$t[,1]) + var(b_y$t[,1]))
pnorm(z)   # one-sided p-value

So we can clearly reject the null hypothesis that they come from the same population. I may have missed a degrees-of-freedom adjustment (I am not sure how bootstrapping works in that regard), but such an adjustment would not change the results drastically.

Binion answered 4/6, 2013 at 2:26 Comment(0)

While the actual soil beds could be considered a stratification variable in some instances, this is not one of them. You only have the one manipulation, between the groups of plants. Therefore, your null hypothesis is that they really do come from the exact same population. Treating the items as if they come from a single set of 11 samples is the correct way to bootstrap in this case.

If you had two plots, and in each plot you tried the different fertilizers over different seasons in a counterbalanced fashion, then the plots would be stratified samples and you'd want to treat them as such. But that isn't the case here.
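Concretely, that is just the pooled bootstrap from the question (repeated here as a sketch, with the reasoning spelled out in comments):

library(boot)

x <- c(11.4, 25.3, 29.9, 16.5, 21.1)
y <- c(23.7, 26.6, 28.5, 14.2, 17.9, 24.3)

# under the null both groups come from one population, so pool everything
total <- c(x, y)

# each pooled resample is split into a "y" part (positions 6-11) and an "x" part (positions 1-5)
diff <- function(d, i) mean(d[i[6:11]]) - mean(d[i[1:5]])
b <- boot(total, diff, R = 10000)

# proportion of null resamples at least as large as the observed difference
p.value <- mean(b$t >= b$t0)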

Towardly answered 1/9, 2010 at 16:34 Comment(0)
