How to cope with a singular fit in a linear mixed model (lme4)?

I am running several linear mixed models for a study about birds, with nest as a random effect. The thing is that in some of these models I get what is called a 'singular fit': my nest random effect has a variance and standard deviation of 0.00.

Some background: I am working with wild birds to see the effect of living in noisy environments on some oxidative stress parameters. For this, we took a blood sample from each nestling in each nest for the laboratory work. Because of the limited blood volume, some oxidative stress parameters couldn't be measured for every nestling.

model <- lmer(antioxidant_capacity ~ age + sex + clutch + zone + (1|nestID),
              data = data, contrasts = list(sex = contr.sum, zone = contr.sum, clutch = contr.sum))

Then I get:

singular fit

This is the table:

REML criterion at convergence: 974.3

Scaled residuals: 
 Min       1Q   Median       3Q      Max 
-2.72237 -0.61737  0.06171  0.69429  2.88008 

Random effects:
 Groups   Name        Variance Std.Dev.
 nestID   (Intercept)   0       0.00
 Residual              363     19.05
Number of obs: 114, groups:  nestID, 46

Fixed effects:
              Estimate Std. Error       df t value Pr(>|t|)    
(Intercept)   294.5970    36.8036 109.0000   8.005 1.41e-12 ***
age            -0.2959     3.0418 109.0000  -0.097 0.922685    
clutch1        -0.5242     2.0940 109.0000  -0.250 0.802804    
sex1            2.3167     1.8286 109.0000   1.267 0.207885    
zone1           6.2274     1.7958 109.0000   3.468 0.000752 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
        (Intr) age    clutch1 sex1  
age     -0.999                      
clutch1  0.474 -0.465               
sex1     0.060 -0.054 -0.106        
zone1   -0.057  0.061 -0.022   0.058
convergence code: 0
singular fit

I have read about singularity problems and, if I have understood correctly, singularity is related to overfitting. Could this be because, for some response variables, I have nests with only one nestling while other nests have several? How can I solve this? Any recommendations?

Thank you so much.

Basrelief answered 8/2, 2019 at 17:28 Comment(0)

In lmer, a singular fit can be caused by collinearity in the fixed effects, as in any other linear model. That would require you to revise your model by removing terms. But in lmer, that (or a "boundary (singular) fit" warning) can also be triggered in quite simple models when a random-effect variance is estimated very near zero and (very loosely) the data are not sufficiently informative to drag the estimate away from the zero starting value.
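
A quick way to see which term is at the boundary (a minimal sketch, assuming the fitted object is called model as in the question):

# Estimated variance components: a variance of (almost) zero marks the
# term that is driving the singular fit
VarCorr(model)

# lme4's own check: TRUE if any variance (or correlation) parameter sits
# on the boundary of its parameter space, within tol
isSingular(model, tol = 1e-4)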

The formal answer is broadly similar either way; drop terms that estimate as zero. And that remains sensible at least until you know which term is causing the problem. But there are times when a negligible variance is reasonably likely but you'd like to retain it in the model; for example because you're quite deliberately looking for intervals on possibly small variances or maybe doing multiple similar experiments and would prefer to extract all the variances consistently. If you're sure of what's going on, you can suppress these warnings via lmerControl, which can be set not to use the relevant tests. For example, you can include

control=lmerControl(check.conv.singular = .makeCC(action = "ignore",  tol = 1e-4))

in your lmer call. That keeps the default tolerance (which .makeCC requires) but suppresses the singular-fit test. (The default is action = "warning", which runs the test and issues the warning.)
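
Put into the full call from the question, that would look roughly like this (a sketch; the tol shown is just the default value):

model <- lmer(antioxidant_capacity ~ age + sex + clutch + zone + (1|nestID),
              data = data,
              contrasts = list(sex = contr.sum, zone = contr.sum, clutch = contr.sum),
              # run without the singular-fit check so the warning is not raised
              control = lmerControl(check.conv.singular =
                                      .makeCC(action = "ignore", tol = 1e-4)))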

Anthocyanin answered 26/3, 2019 at 22:37 Comment(2)
I have a similar issue - estimation of a random effect variance to be close to zero, causing a singular fit warning. My random effect is there specifically to control for non-independence of some data points from the same individual, i.e. Individual_ID is my random effect. If I was to remove this random effect from the model, would this not be tantamount to pseudoreplication? Would this be grounds to leave the random effect in the model despite the estimate of ~zero?Leishaleishmania
Just a bit of intuition, and not necessarily an end-all answer. @Roasty247, if the variance term is estimated near zero, it more or less tells us that dependency structure can be ignored. It's scary to say that in a case like yours, because you know there's a specific dependency structure. But the estimate of near zero effectively tells you the numbers actually don't show dependency in that way, and for the data at hand, it can be ignored.Appreciative
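
A sketch of what that intuition looks like in practice, reusing the variable names from the question (the object names m_mixed and m_plain are just illustrative): when the nest variance is estimated as zero, refitting without the random intercept barely changes the fixed-effect estimates.

# Mixed model with the (near-zero) nest random intercept
m_mixed <- lmer(antioxidant_capacity ~ age + sex + clutch + zone + (1|nestID), data = data)

# Ordinary linear model that ignores nest entirely
m_plain <- lm(antioxidant_capacity ~ age + sex + clutch + zone, data = data)

# With a zero variance estimate, these two columns should be essentially identical
cbind(mixed = fixef(m_mixed), plain = coef(m_plain))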

Are you actually interested in whether each of the fixed effects in your model has an effect? For example, age or sex may explain some of the variation, but perhaps you could include it as a random effect rather than a fixed effect. Changing it to a random effect (if that is sensible) might help with the singularity issue.

My interpretation of the singularity issue, which certainly could be incorrect, is that each combination of predictor levels in your model has only one observation/measurement. Therefore, you may not have enough observations to include all of those variables as fixed effects.
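
A quick way to check that (a sketch, assuming the data frame and column names from the question):

# How many measured nestlings does each nest contribute?
table(data$nestID)

# How many observations fall in each zone x clutch x sex cell?
with(data, table(zone, clutch, sex))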

Stockish answered 21/2, 2019 at 19:5 Comment(1)
That (2nd paragraph) is one cause of a singularity warning in lmer (and in simpler linear models), but lmer also warns on other near-zero variances. The simplest example is a nested experiment in which there are plenty of groups and observations but one of the grouping factors has an estimated zero variance.Anthocyanin
