I have a big data frame containing bat abundance per year, and I would like to model the population trend over those years in R. I need to include year additionally as a random effect, because my data points aren't independent: the bat population in one year directly affects the population of the next year (if there are 10 bats one year, they will likely still be alive the next year). My full dataset is large, but I have used the group_by() function to create the simpler data frame laid out below. The full dataset also includes month and day.
year    total_indv
2000    39
2001    84
etc.    etc.
Here is the model I wish to use with lme4.
BLE_glm6 <- glm(total_indv ~ year + (year|year), data = BLE_total, family = poisson)
Because year is already the predictor variable, R complains when I add it again as a random effect, since the two are perfectly correlated. So I am wondering: how do I account for the individuals in one year directly affecting the number of individuals the next year, if I can't include year as a random effect within the model?
There are a few possibilities. The most obvious would be to fit a Poisson model with the number of bats in the previous year as an offset:
## set up lagged variable (previous year's count)
BLE_total <- transform(BLE_total,
                       total_indv_prev = c(NA, total_indv[-length(total_indv)]))
## or use dplyr::lag() if you prefer the tidyverse

glm(total_indv ~ year + offset(log(total_indv_prev)),
    data = BLE_total, family = poisson)
This will fit the model
mu = total_indv_prev*exp(beta_0 + beta_1*year)
total_indv ~ Poisson(mu)
i.e. exp(beta_0 + beta_1*year) will be the predicted ratio between the current and previous year. (See here for further explanation of the log-offset in a Poisson model.)
If you want year as a random effect (sorry, read the question too fast), then
library(lme4)
glmer(total_indv ~ offset(log(total_indv_prev)) + (1|year), ...)
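To make the offset approach concrete, here is a self-contained sketch on simulated data (the counts, starting population, and 5% growth rate below are invented purely for illustration):

```r
set.seed(101)
years <- 2000:2019
n <- numeric(length(years))
n[1] <- 50
for (i in 2:length(years)) {
  ## each year's count is Poisson around 1.05x the previous year's
  n[i] <- rpois(1, lambda = 1.05 * n[i - 1])
}
BLE_total <- data.frame(year = years, total_indv = n)

## lagged variable: previous year's count
BLE_total <- transform(BLE_total,
                       total_indv_prev = c(NA, total_indv[-length(total_indv)]))

## glm() drops the first row (NA lag) automatically via na.omit
fit <- glm(total_indv ~ year + offset(log(total_indv_prev)),
           data = BLE_total, family = poisson)
exp(coef(fit))  ## exp(beta_1) is the estimated trend in the year-to-year ratio
```

With the offset in place, the coefficients describe growth ratios rather than absolute counts, which is exactly the "this year depends on last year" structure asked about.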
I have a 3-stage stratified sampling design of a national survey. I have code in Stata to do the weighting, but I struggle to reproduce it with the R survey package.
Sampling design is the following. The sampling universe is stratified into 49 strata. Within each stratum, sampling is done in three stages. (1) PPS selection of a precinct. (2) Random systematic selection of a household using random route technique, basically. (3) Random systematic selection of a respondent within a household using some Kish technique modification.
There is a Stata weighting code that is assumed to perform well:
svyset precinct [pweight=indwt], strata(strt) fpc(npsu) singleunit(certainty) || qnum, fpc(nhh) || _n, fpc(nhhm)
Here, precinct holds the precinct numbers and strt the strata codes. The population sizes used for the FPCs are npsu (number of PSUs, i.e. precincts, per stratum), nhh (number of households per PSU), and nhhm (number of eligible members per household). qnum is the unique questionnaire number, which is the same for a selected household and its respondent.
I try to reproduce it with the following R code.
library(survey)
options("survey.lonely.psu" = "certainty")
svy_data <- svydesign(ids = ~precinct + qnum,
                      strata = ~strt,
                      weights = ~indwt,
                      fpc = ~npsu + nhh,
                      data = data)
I can't do fpc = ~npsu + nhh + nhhm, because then I get an error:
Error in popsize < sampsize : non-conformable arrays.
The resulting confidence intervals from confint(svymean(...)) in R don't match the Stata confidence intervals produced via the tabout ado package. They are close, but slightly shifted in R.
My assumption is that I should do something that Stata's _n term does, and get a 3-stage design instead of a 2-stage one. How could I do that?
Or is there anything else I can try to improve in my R code to match Stata?
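For what it's worth, a three-stage svydesign() call can be sketched by giving three terms in ids and three matching terms in fpc; the "non-conformable arrays" error arises when the two lists have different lengths. The sketch below assumes each row of data is one respondent and that a unique row identifier id plays the role of Stata's _n; the toy data are entirely invented:

```r
library(survey)

## invented toy data: 2 strata, 2 sampled precincts each, 2 households per
## precinct, 1 respondent per household (as in the Kish-style third stage)
set.seed(1)
data <- data.frame(
  strt     = rep(1:2, each = 4),
  precinct = rep(1:4, each = 2),               # PSU (precinct) numbers
  qnum     = 1:8,                              # questionnaire number = household
  id       = 1:8,                              # respondent row id, like Stata's _n
  indwt    = runif(8, 0.5, 2),
  npsu     = rep(c(10, 12), each = 4),         # precincts per stratum
  nhh      = rep(c(50, 60, 55, 65), each = 2), # households per precinct
  nhhm     = rep(2:3, 4),                      # eligible members per household
  y        = rnorm(8)
)

options("survey.lonely.psu" = "certainty")

## three id stages matched by three fpc terms; add nest = TRUE if precinct
## labels repeat across strata in the real data
svy_data <- svydesign(ids = ~precinct + qnum + id,
                      strata = ~strt,
                      weights = ~indwt,
                      fpc = ~npsu + nhh + nhhm,
                      data = data)
svymean(~y, svy_data)
```

This is only a sketch of the design specification, not a guarantee that the variance estimator will then match Stata's exactly; small remaining differences can come from how each package handles single-unit stages and degrees of freedom.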
I am trying to fit a survival model with left-truncated data using the survival package however I am unsure of the correct syntax.
Let's say we are measuring the effect of age when hired (age) and job type (parttime) on the duration of employment of doctors in public health clinics. Whether the doctor quit or was censored is indicated by the censor variable (0 for quitting, 1 for censoring). This behaviour was measured in an 18-month window. Time to quitting or censoring is captured by two variables, entry (start time) and exit (stop time), indicating how long, in years, the doctor was employed at the clinic. If a doctor commenced employment after the window 'opened', their entry time is set to 0. If they commenced employment before the window 'opened', their entry time represents how long they had already been employed in that position when the window 'opened', and their exit time is how long after they were initially hired they either quit or were censored by the window 'closing'. We also postulate a two-way interaction between age and duration of employment (exit).
This is the toy data set. It is much smaller than a normal dataset would be, so the estimates themselves are not as important as whether the syntax and the variables included (using the survival package in R) are correct, given the structure of the data. The toy data has the exact same structure as a dataset discussed in Chapter 15 of Singer and Willett's Applied Longitudinal Data Analysis. I have tried to match the results they report, without success. There is not a lot of explicit information online on how to conduct survival analyses on left-truncated data in R, and the website that provides code for the book (here) does not provide R code for the chapter in question. The methods for modelling time-varying covariates and interaction effects are quite complex in R, and I just wonder if I am missing something important.
Here is the toy data
id <- 1:40
entry <- c(2.3,2.5,2.5,1.2,3.5,3.1,2.5,2.5,1.5,2.5,1.4,1.6,3.5,1.5,2.5,2.5,3.5,2.5,2.5,0.5,rep(0,20))
exit <- c(5.0,5.2,5.2,3.9,4.0,3.6,4.0,3.0,4.2,4.0,2.9,4.3,6.2,4.2,3.0,3.9,4.1,4.0,3.0,2.0,0.2,1.2,0.6,1.9,1.7,1.1,0.2,2.2,0.8,1.9,1.2,2.3,2.2,0.2,1.7,1.0,0.6,0.2,1.1,1.3)
censor <- c(1,1,1,1,0,0,0,0,1,0,0,1,1,1,0,0,0,0,0,0,rep(1,20))
parttime <- c(1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0)
age <- c(34,28,29,38,33,33,32,28,40,30,29,34,31,33,28,29,29,31,29,29,30,37,33,38,34,37,37,40,29,38,49,32,30,27,35,34,35,30,35,34)
doctors <- data.frame(id,entry,exit,censor,parttime,age)
Now for the model.
coxph(Surv(entry, exit, 1-censor) ~ parttime + age + age:exit, data = doctors)
Is this the correct way to specify the model given the structure of the data and what we want to know? An answer here suggests it is correct, but I am not sure whether, for example, the interaction variable is correctly specified.
As is often the case, it's not until I post a question about a problem on SO that I work out how to do it myself. If there is an interaction with a time predictor, we need to convert the dataset into a counting-process, person-period format (i.e. long form). This is because each participant needs an interval that tracks their status with respect to the event at every time point at which the event occurred to anyone else in the data set, up to the point when they exited the study.
First let's make an event variable
doctors$event <- 1 - doctors$censor
Before we run the Cox model we need to use the survSplit function in the survival package. To do this we need a sorted vector of the unique time points at which an event occurred (note sort(), not order(), which would return indices rather than times)
cutPoints <- sort(unique(doctors$exit[doctors$event == 1]))
Now we can pass this into the survSplit function to create a new dataset...
docNew <- survSplit(Surv(entry, exit, event) ~ .,
                    data = doctors,
                    cut = cutPoints,
                    end = "exit")
... which we then run our model on
coxph(Surv(entry,exit,event) ~ parttime + age + age:exit, data = docNew)
Voila!
Until recently I used SPSS for my statistics, but since I am no longer at university, I am changing to R. Things are going well, but I can't seem to replicate the results I obtained for a repeated-effects LMM in SPSS. I did find some threads here that seemed relevant, but those didn't solve my issues.
This is the SPSS script I am trying to replicate in R
MIXED TriDen_L BY Campaign Watering Heating
  /CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1)
    SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE)
    PCONVERGE(0.000001, ABSOLUTE)
  /FIXED=Campaign Watering Heating Campaign*Watering Campaign*Heating
    Watering*Heating Campaign*Watering*Heating | SSTYPE(3)
  /METHOD=REML
  /PRINT=TESTCOV
  /RANDOM=Genotype | SUBJECT(Plant_id) COVTYPE(AD1)
  /REPEATED=Week | SUBJECT(Plant_id) COVTYPE(AD1)
  /SAVE=PRED RESID
Using the lme4 package in R I have tried:
lmm <- lmer(lnTriNU ~ Campaign + Watering + Heating + Campaign*Watering
+ Campaign*Heating + Watering*Heating + Campaign*Watering*Heating
+ (1|Genotype) + (1|Week:Plant_id), pg)
But this -and the other options I have tried for the random part- keep producing an error:
Error: number of levels of each grouping factor must be < number of observations
In SPSS everything runs fine, so I suspect I am not modelling the repeated effect correctly. Also, saving predicted and residual values is not yet straightforward for me...
I hope anyone can point me in the right direction.
You probably need to take out either Week or Plant_id, as I think you have as many levels of each variable as you have cases. You can nest observations within a larger unit if you add a variable to model time. I am not familiar with SPSS, but if your time variable is Week (i.e., Week has a value of 1 for the first observation, 2 for the second, etc.), then it should not be a grouping factor but a random slope in the model. Something like <snip> week + (1 + week|Plant_id).
Is Plant_id nested within Genotype, and does Week indicate different measurement occasions? If so, I assume the following formula gives the required result:
lmm <- lmer(lnTriNU ~ Campaign + Watering + Heating + Campaign*Watering
+ Campaign*Heating + Watering*Heating + Campaign*Watering*Heating
+ (1+Week|Genotype/Plant_id), pg)
Also saving predicted and residual values is not yet straightforward for me...
Do you mean "computing" by "saving"? In R, all relevant information is stored in the returned model object and is accessible through functions such as residuals() or predict(), called on the saved object (in your case, residuals(lmm)).
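As a concrete illustration, here is the pattern using lme4's built-in sleepstudy data (a stand-in for your pg data frame, since I don't have your data):

```r
library(lme4)

## stand-in model; replace the formula and data with your own
lmm <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

## the equivalent of SPSS's /SAVE=PRED RESID: attach them to the data
sleepstudy$pred  <- predict(lmm)    # fitted values, including random effects
sleepstudy$resid <- residuals(lmm)  # one response residual per observation
head(sleepstudy)
```

By default predict() on a merMod object includes the random effects, so pred + resid recovers the observed response for each row.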
Note that lmer does not support the AD1 (ante-dependence) covariance type that your SPSS syntax specifies.