I am comparing GLM output from R and SAS for a model with a Gamma distribution. The point estimates are identical, but the standard errors differ, and therefore so do the p-values.
Does anyone know why? I am wondering whether R and SAS use different methods to estimate the standard errors, maybe MLE vs. method of moments?
R sample code
library(data.table)

set.seed(2)
test = data.table(y = rnorm(100, 1000, 100), x1 = rnorm(100, 50, 20), x2 = rgamma(100, 0.01))
model = summary(glm(formula = y ~ x1 + x2, family = Gamma(link = "log"), data = test))
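The transfer of the simulated data into SAS is not shown here; a minimal sketch of one possible export step from R (the file name test_data.csv and the SAS data set name test_data are assumptions) would be:

# Hypothetical export so SAS can read the same simulated values
fwrite(test, "test_data.csv")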
Using the same data generated above, I ran the following model in SAS:
proc genmod data= test_data;
model y = x1 x2 /link= log dist= gamma;
run;
The output from R is as follows:
Call:
glm(formula = y ~ x1 + x2, family = Gamma(link = "log"), data = test)
Deviance Residuals:
     Min       1Q   Median       3Q      Max
-0.26213 -0.08456 -0.01033  0.08364  0.20878

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  6.9210757  0.0324674 213.170   <2e-16 ***
x1          -0.0003371  0.0005985  -0.563    0.575
x2           0.0234097  0.0627251   0.373    0.710
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Gamma family taken to be 0.01376099)
Null deviance: 1.3498 on 99 degrees of freedom
Residual deviance: 1.3436 on 97 degrees of freedom
AIC: 1240.6
Number of Fisher Scoring iterations: 4
Check 1/model$dispersion against the value SAS reports; there is a difference there, and hence in the SEs. In R it is calculated by sum((object$weights * object$residuals^2)[object$weights > 0]) / object$df.residual (look at summary.glm). Maybe try and find out how SAS calculates it. – Hellas
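Following up on that comment: summary.glm plugs the moment (Pearson) estimate of the dispersion quoted above into the standard errors, whereas SAS's PROC GENMOD, as far as I know, estimates the Gamma scale by maximum likelihood by default, so a mismatch in the dispersion estimate would carry straight into the SEs. A minimal sketch in R of how the two dispersion estimates could be compared (it repeats the data step from the question and uses MASS::gamma.dispersion for the ML estimate; whether this reproduces SAS exactly is an assumption to verify):

library(data.table)
library(MASS)

set.seed(2)
test = data.table(y = rnorm(100, 1000, 100), x1 = rnorm(100, 50, 20), x2 = rgamma(100, 0.01))
fit = glm(y ~ x1 + x2, family = Gamma(link = "log"), data = test)

# Moment (Pearson) estimate that summary.glm uses for the SEs shown above
pearson_disp = sum((fit$weights * fit$residuals^2)[fit$weights > 0]) / fit$df.residual

# Maximum-likelihood estimate of the dispersion (the Gamma shape is 1/dispersion)
ml_disp = gamma.dispersion(fit)

c(pearson = pearson_disp, ml = ml_disp)

# Re-doing the summary with the ML dispersion changes the standard errors
summary(fit, dispersion = ml_disp)

If the SEs from summary(fit, dispersion = ml_disp) line up with the SAS output, that would confirm the dispersion estimator is the only difference. If I remember the GENMOD MODEL statement correctly, a SCALE=PEARSON option can force a Pearson-based scale on the SAS side as a cross-check, but that is worth verifying in the SAS documentation.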