Use of svyglm and svydesign with R for multistage stratified cluster design

I have a complicated data set which comes from a multistage stratified cluster design. I originally analysed it using glm(), but now realise that I have to use svyglm(). I'm not quite sure how best to model the data with svyglm and was wondering if anyone could help shed some light.
I am attempting to see the effect that a variety of covariates taken at time 1 have on a binary outcome taken at time 2.
The sampling strategy was as follows: state -> urban/rural -> district -> subdistrict -> village. Within each village, individuals were randomly selected, with each of these having an id (uniqid).
I have a variable in the data frame for each of these stages of the sampling strategy. I also have the following variables: outcome, age, sex, income, marital_status, urban_or_rural_area, uniqid, weights. The formula I want for my regression equation is outcome ~ age + sex + income + marital_status + urban_or_rural_area. The weights are coded in the weights variable, and I had set the family to binomial(link = logit).
If anyone has any idea how such an approach could be coded in R with svyglm I would be most appreciative. I'm quite confused as to what should be passed as id, fpc and nest. Do I have to specify all levels of the stratified design or just some?
Any direction, or resources which explain this well would be massively appreciated.

You don't really give enough information about the design: which of the geographical units are strata and which are clusters. For example, my guess is that you sample both urban and rural in all states, and that you don't sample all villages, but I don't know whether you sample all districts or subdistricts. I also don't know whether your overall sampling fraction is large or small (that is, whether the with-replacement approximation is ok).
Let's pretend you sample just some districts, so districts are your Primary Sampling Units, and that the overall sampling fraction of people is small. The design command is
your_design <- svydesign(id = ~district, weights = ~weights,
                         strata = ~interaction(state, urban_rural, drop = TRUE),
                         data = your_data_frame)
That is, the strata are combinations of state and urban/rural and any combinations that aren't in your data set don't exist in the population (maybe some states are all-rural or all-urban). Within each stratum you have districts, and only some of these appear in the sample. In your geographical hierarchy, districts are then the first level that is sampled rather than exhaustively enumerated.
You don't need fpc unless you want to specify the full multistage design without replacement.
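If you did want the full multistage without-replacement specification, it could look something like the sketch below. The four count variables passed to fpc (the population number of districts per stratum, subdistricts per district, and so on) are hypothetical; they would have to exist in your data, and you should check ?svydesign for the exact multistage syntax.
# Hypothetical sketch only: the fpc columns are assumed, not real.
full_design <- svydesign(id = ~district + subdistrict + village + uniqid,
                         strata = ~interaction(state, urban_rural, drop = TRUE),
                         fpc = ~n_districts + n_subdistricts + n_villages + n_people,
                         weights = ~weights,
                         data = your_data_frame)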
The nest option is not about how the survey was done but is about how variables are coded. The US National Center for Health Statistics (bless their hearts) set up a lot of designs that have many strata and two primary sampling units per stratum. They call these primary sampling units 1 and 2; that is, they reuse the names 1 and 2 in every stratum. The svydesign function is set up to expect different sampling unit names in different strata, and to verify that each sampling unit name appears in just one stratum, as a check against data errors. This check has to be disabled for NCHS surveys and perhaps some others that also reuse sampling unit names. You can always leave out the nest option at first; svydesign will tell you if it might be needed.
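If it does turn out to be needed, the option is just one extra argument, as in this sketch with hypothetical psu and stratum column names:
# nest = TRUE says PSU labels are only unique within a stratum, not globally.
nchs_style_design <- svydesign(id = ~psu, strata = ~stratum, weights = ~weights,
                               nest = TRUE, data = your_data_frame)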
Finally, the models:
svyglm(outcome ~ age + sex + income + marital_status + urban_or_rural_area,
       design = your_design, family = quasibinomial)
Using binomial or quasibinomial will give identical answers, but using binomial will give you a harmless warning about non-integer weights. If you use quasibinomial, the harmless warning is suppressed.

Related

GLMM: Needing overall advice on selecting model terms for glmm modelling in R

I would like to create a model to understand how habitat type affects the abundance of bats found, but I am struggling to understand which terms I should include. I wish to use lme4 to fit a GLMM; I have chosen a GLMM because the response is a Poisson count (you can't have half a bat) and the distribution is heavily skewed (lots of single bats).
My dataset is very big and comprises abundance counts recorded by an individual on a bat survey (the bat survey number is not included as it's public data). My data set includes abundance, year, month, day, environmental variables (temp, humidity, etc.), recorded_habitat, surrounding_habitat, latitude and longitude, and is structured like the set shown below. P.S. An occurrence is an anonymous recording made by an observer at a set location; at a location a number of bats will be recorded. It's not relevant as it's from a greater dataset.
occurrence  abundance  latitude  longitude  year  month  day  (environmental variables)
3456        45         53.56     3.45       2000  5      3    34.6

surrounding_hab  recorded_hab
A                B
Recorded habitat and surrounding habitat take letter values (A-I), each corresponding to a habitat type. Also, the table is split in two as it wouldn't fit in the box.
The models shown below are the ones I think are a good choice.
rhab1 <- glmer(individual_count ~ recorded_hab + (1|year) + latitude + longitude + sun_duration2, family = poisson, data = BLE)
summary(rhab1)
rhab2 <- glmer(individual_count ~ surrounding_hab + (1|year) + latitude + longitude + sun_duration2, family = poisson, data = BLE)
summary(rhab2)
I'll now explain my questions regarding the models I have chosen, with my current thinking/justification.
Firstly, I am confused about the mix of categorical and numeric variables: is it wise to include the environmental variables given that they are numeric? My current thinking is that scaling the environmental variables allowed the model to converge, so including them is okay? (The sketch below shows the scaling I mean.)
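For reference, a sketch of that scaling, using the variable names from the models above (illustrative only; scale() centres and standardises each covariate, which often helps glmer() converge):
# Purely illustrative: standardise the numeric covariates inline.
library(lme4)
rhab1_s <- glmer(individual_count ~ recorded_hab + scale(latitude) +
                   scale(longitude) + scale(sun_duration2) + (1 | year),
                 family = poisson, data = BLE)
summary(rhab1_s)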
Secondly, I am confused about the mix of spatial and temporal variables, primarily whether I should include temporal variables, as the predictor is a temporal variable. I'd like to include year as a random effect, as bat populations from one year directly affect bat populations the next year, and also latitude and longitude; does this seem wise?
I am also unsure whether latitude and longitude should be random effects. The confusion arises because latitude and longitude do have some effect on the land use.
Additionally, is it wise to include recorded_habitat and surrounding_habitat in the same model? When I have tried this it produces a massive output with a huge correlation matrix, so I'm thinking I should run two models (year ~ recorded_hab) and (year ~ surrounding_hab) and then discuss them separately - hence the two models.
Sorry this question is so broad! Any help or thinking is appreciated, including data restructuring or model term choice. I'm also new to Stack Overflow, so please do advise on question layout/rules etc. if there are glaringly obvious mistakes.

MatchIt function returns equal continuous variable but unequal categorical variable

I used the MatchIt function to derive a 1:4 ratio treated:untreated dataset, attempting to achieve a similar average age and gender frequency.
I have a small treated group (n = 44) and a much larger control group (n = 980). To reduce the size of the control group and exclude age and gender as confounders, I attempted to use MatchIt to create a control group of 176 with an average age and gender balance similar to the treated group.
m.out <- matchit(Treated ~ AGE + SEX, data = d,
                 method = "optimal",
                 ratio = 4)
The summary of the output is:
Summary of balance for matched data:
         Means Treated Means Control SD Control Mean Diff eQQ Med
distance        0.0602        0.0603     0.0250   -0.0001       0
AGE            57.5227       58.4034     7.9385   -0.8807       1
SEXF            0.4318        0.1477     0.3558    0.2841       0
SEXM            0.5682        0.8523     0.3558   -0.2841       0
The AGE variable worked great (it is not significantly different), but the gender seemed off (85% male in controls vs 57% in treated), so I performed a chi-squared test on the treated ~ gender data. It showed a highly significant difference in gender:
chisq <- with(m.data, chisq.test(SEX, Treated))

data:  SEX and Treated
X-squared = 15.758, df = 1, p-value = 7.199e-05
How do I account for the difference here? Is my problem with the MatchIt function (an incorrect method?), or has it worked and I've applied the chi-squared test to the wrong problem?
There are many reasons why propensity score matching didn't "work" in this case. In general, it isn't guaranteed to balance covariates in small samples; the theoretical properties of the propensity score apply in large samples and with the correct propensity score (and yours is almost certainly not correct).
Some more specific reasons could be that with 4:1 matching, many control units that are far from the treated units end up matched to your treated units. You could see whether matching fewer control units fixes this by changing the ratio. It could also be that optimal matching is not a good matching method to use here: optimal matching finds optimal pairs based on the propensity score, but you want balance on the covariates, not on the propensity score. You could try genetic matching (i.e., using method = "genetic"), though this will probably fail as well (it's like using a hammer on a thumb-tack).
One recommendation is to use the designmatch package to perform cardinality matching, which allows you to impose balance constraints and perform the matching without having to estimate a propensity score. With only two covariates, though, exact matching on gender and nearest-neighbor matching on age should do a fairly good job. Set exact = d$gender and distance = d$age in matchit() and see if that works better. You don't need a propensity score for this problem.
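A minimal sketch of that suggestion, using the column names from the question (recent versions of MatchIt also accept a formula for exact and a numeric vector for distance; illustrative, not tested on your data):
# Exact-match on gender, nearest-neighbor on raw age (illustrative only).
library(MatchIt)
m.out2 <- matchit(Treated ~ AGE + SEX, data = d,
                  method = "nearest", ratio = 4,
                  exact = ~SEX,      # gender must match exactly
                  distance = d$AGE)  # pair on the closest raw age
summary(m.out2)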
Finally, don't use hypothesis tests to assess balance. The balance output is enough. Don't stop trying to find good matches until your balance can't improve any more. See Ho, Imai, King, & Stuart (2007) for more information on this; they are the authors of MatchIt too.
Ho, D. E., Imai, K., King, G., & Stuart, E. A. (2007). Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference. Political Analysis, 15(3), 199–236. https://doi.org/10.1093/pan/mpl013

R: Propensity Score Matching using MatchIt. How to specify desired matching accuracy for different covariates?

I'm rather new to R and especially to the method of matching by propensity scores. My dataset includes two groups of people that differ in whether they were treated or not; unfortunately they also differ significantly in age and disease duration, hence my wish to match them.
So far this is my code:
set.seed(2208)
mod_match <- matchit(TR ~ age + disease_duration + sex + partner + work + academic,
                     data = Data_nomiss,
                     method = "nearest",
                     caliper = .025)
summary(mod_match)
This code works fine, but I wondered whether there is a way to weight the importance of the covariates for the accuracy of the matching. For me it is crucial that the groups are as close as possible on age and disease duration (numeric), whereas the rest of the variables (factors) should also be matched, but for my purposes may differ in their means a little more than the first two.
While searching for a solution to my problem I came across a post from someone who had basically the same problem: http://r.789695.n4.nabble.com/matchit-can-I-weight-the-parameters-td4633907.html
In that case it was proposed to combine nearest-neighbor and exact matching, but transferred to my dataset this leads to a disproportionate reduction of my sample. In the end what I'd like is some sort of customised matching process focusing on age and disease duration while also involving the last three variables, but in a weaker way.
Does anyone happen to have an idea how this could be realized? I'd be really glad to receive any kind of tips on this matter, and thank you for your time!
Unfortunately, MatchIt does not provide this functionality. There are two ways to do this instead of using MatchIt, but they are slightly advanced. Note that neither uses propensity scores: the point of propensity score matching is to match on a single number, the propensity score, which makes the matching procedure blind to the original covariates for which balance is desired.
The first is to use the Matching package and supply your own weight matrix to Weight.matrix in Match(). You could upweight age and disease duration in the weight matrix, as in the sketch below.
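A sketch of that first option (Match() only uses Weight.matrix when Weight = 3; the weight values are arbitrary illustrations, and data.matrix() coerces the factor covariates to numeric codes):
# Illustrative only: upweight age and disease duration in the weight matrix.
library(Matching)
X <- data.matrix(Data_nomiss[, c("age", "disease_duration", "sex",
                                 "partner", "work", "academic")])
W <- diag(c(10, 10, 1, 1, 1, 1))  # heavier weight on the first two columns
m <- Match(Tr = Data_nomiss$TR, X = X, Weight = 3, Weight.matrix = W)
summary(m)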
The second is to use the designmatch package to do cardinality matching, which allows you to specify balance constraints; it will use optimization to find the largest sample that meets those constraints. In bmatch(), enter your covariates of interest into the mom argument, which also allows you to include specific balance constraints for each variable. You can require stricter balance constraints for age and disease duration, as sketched below.
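And a sketch of the second option (bmatch() expects the treated units sorted first; the tolerance values are arbitrary illustrations, tighter for age and disease duration than for the factors):
# Illustrative only: cardinality matching with tighter mean-balance
# constraints on age and disease duration.
library(designmatch)
d2 <- Data_nomiss[order(Data_nomiss$TR, decreasing = TRUE), ]  # treated first
covs <- data.matrix(d2[, c("age", "disease_duration", "sex",
                           "partner", "work", "academic")])
tols <- c(.05, .05, .25, .25, .25, .25) * apply(covs, 2, sd)
out <- bmatch(t_ind = d2$TR, subset_weight = 1,
              mom = list(covs = covs, tols = tols))
matched <- d2[c(out$t_id, out$c_id), ]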

R matchit on 7 variables with different seeds

I am using the following code to match 2 cohorts (2800 controls, 460 treated) of different patients:
set.seed(99)
m.out <- matchit(treatment ~ gender + age + VarC + diseaseDuration +
                   pastActivity + activity + country, data = Pat.match,
                 method = "nearest", ratio = 5, discard = "control",
                 caliper = 0.1, m.order = "smallest")
After matching, the cohorts are reduced to about 1230 controls vs. 400 treated.
These numbers are similar when I change the seed. However, if I check more closely (by patient ID), the total cohorts for different seeds differ in about 20% of the patients. To be more precise:
set.seed(99) results in a cohort that has an overlap of only 80% with the resulting cohort of set.seed(27).
And this might have a huge impact on further general models and statistical analyses. Have I overlooked something?
Regards!
Sometimes this occurs when units have the same or very similar propensity scores; I believe MatchIt resolves this with a randomly selected match. I actually disagree with @dash2 that you shouldn't change the seed until you get a result you like. You should perform the matching procedure as many times as you want until you arrive at covariate balance. If your data are balanced and all your treated units are retained (or at least the same ones across matching specifications), then your effect estimate will not vary systematically with your matched set. Just remember that once you have estimated your treatment effect, you can't go back and redo your matching results (which is probably what @dash2 is getting at). But at the matching phase, this is not a concern.
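If you want to quantify how much the matched set moves between seeds, a sketch along these lines would do it (the id column is hypothetical; any unique patient identifier works):
# Hypothetical sketch: refit the match under two seeds and compare the
# retained patient IDs.
match_ids <- function(seed) {
  set.seed(seed)
  m <- matchit(treatment ~ gender + age + VarC + diseaseDuration +
                 pastActivity + activity + country, data = Pat.match,
               method = "nearest", ratio = 5, discard = "control",
               caliper = 0.1, m.order = "smallest")
  Pat.match$id[m$weights > 0]  # units retained by the match
}
ids99 <- match_ids(99)
ids27 <- match_ids(27)
length(intersect(ids99, ids27)) / length(union(ids99, ids27))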
So the computing side of this is that matchit() is probably doing something random, even though you haven't, e.g., specified m.order = "random". What that could be is probably easiest to find out by looking through the source code...
The statistical side is really for Cross Validated, not here, but I would suggest:
if any of your results depend non-trivially on the seed, don't trust the results; they are not robust.
in particular - I'm sure you know this but it is worth reiterating - do NOT try different seeds until you get a result you like!

How to structure stratified data for Poisson regression

I'm trying to use R to conduct Poisson regression on some data that I have. The current structure of the data is as follows:
Data is stratified based on three occupations. There are four levels of income in the data. Within each stratum, for each level of income there is
- the number of workplace accidents that have occurred, and
- the total man months observed.
Here's an example of the setup. The number in parentheses is the total man months observed and the number not in parentheses is the number of workplace accidents.
My question is: how do I set up this data and perform a Poisson regression on the effect of income level on the occurrence of workplace accidents? Ideally I would like to adjust for occupation and find the effect of income alone, but as a starting point I'm not sure how to set it up as a Poisson regression problem at all. I thought about dividing the number of injuries by the months of observation, but that gives non-integer values, so I assume that's not the right thing to do.
To reiterate, predictor: income level; response variable: workplace accidents.
BTW, it would be very easy to separate the parentheses numbers and put them into their own column, if that would make sense to do.
I'd really appreciate any suggestions on how to set this up. I am sure other statisticians are working with similarly structured data and might like to gain some insight as well. Thanks so much!
@thelatemail might be correct in thinking this is better suited for stats.stackexchange.com, but here is some R code. That data is in wide format and you need to restructure it to long format. (And you will not want to include the totals columns.) After converting the first four columns to a long format where you have 'occupation' and 'level' as factor-class variables, and accident 'counts' and exposure 'months' as numeric columns, you could use this call to glm:
fit <- glm(counts ~ level + occup + offset(log(months)),
           data = dfrm, family = "poisson")
The offset needs to be log()-ed to agree with the default log link function of the poisson family, which models the log of the expected counts.
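The restructuring itself might look like this sketch, assuming each cell is a string such as "12 (3400)" and a wide_data frame with one occup column plus one column per income level (hypothetical names, since the original table isn't reproduced here):
# Hypothetical sketch: reshape wide to long, then split "accidents (months)"
# cells into the two numeric columns used by the glm() call above.
library(tidyr)
dfrm <- pivot_longer(wide_data, cols = -occup,
                     names_to = "level", values_to = "cell")
dfrm <- extract(dfrm, cell, into = c("counts", "months"),
                regex = "(\\d+)\\s*\\((\\d+)\\)", convert = TRUE)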
(You cannot really expect us to redo that data entry task, now can you?)
