How should you use scaled weights with the svydesign() function in the survey package in R?

I am using the survey package in R to analyse the "Understanding Society" social survey. The main user guide for the survey specifies (on page 45) that the weights have been scaled to have a mean of 1. When using the svydesign() function, I pass the weight variable to the weights argument.
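For reference, a minimal sketch of that kind of call (psu, strata, ind_xw, usoc and age are placeholder names, not the actual Understanding Society variables):
library(survey)
# Hypothetical design object; substitute your own id, strata and weight variables
des <- svydesign(ids = ~psu, strata = ~strata, weights = ~ind_xw, data = usoc, nest = TRUE)
svymean(~age, des)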
In the survey package documentation, under the surveysummary() function, it states:
Note that the design effect will be incorrect if the weights have been rescaled so that they are not reciprocals of sampling probabilities.
Will I therefore get incorrect estimates and/or standard errors when using functions such as svyglm(), etc.?
This came to my attention because, when using the psrsq() function to get the Pseudo R-Squared of a model, I received the following warning:
Weights appear to be scaled: rsquared may be wrong
Any help would be greatly appreciated! Thanks!

No, you don't need to worry
The warning is only about design effect estimation (which most people don't need to do), and only about without-replacement design effects (DEFF rather than DEFT). If all you need are estimates and standard errors, these are fine; there is no problem.
If you want to estimate the design effects, R needs to estimate the standard errors (which is fine) and also estimate what the standard errors would be under simple random sampling without replacement, with the same sample size. That second part is the problem: working out the variance under SRSWoR requires knowing the population size. If you have scaled the weights, R can no longer work out the population size.
If you do need design effects (e.g., to do a power calculation for another survey), you can still get the DEFT design effects that compare to simple random sampling with replacement. It's only if you want design effects compared to simple random sampling without replacement that you need to worry about the scaling of the weights. Very few people are in that situation.
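For example, both flavours can be requested from svymean() (here `des` is a placeholder design object and `age` a placeholder variable):
library(survey)
# Design effect relative to simple random sampling WITH replacement; unaffected by rescaled weights
svymean(~age, des, deff = "replace")
# Default design effect relative to SRS WITHOUT replacement; needs weights that are reciprocals of sampling probabilities
svymean(~age, des, deff = TRUE)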
As a final note, surveysummary isn't a function; it's a help page.

Related

Estimating Robust Standard Errors from Covariate Balanced Propensity Score Output

I'm using the Covariate Balancing Propensity Score (CBPS) package and I want to estimate robust standard errors for my ATT results that incorporate the weights. The MatchIt and twang tutorials both recommend using the survey package to incorporate weights into the estimate of robust standard errors, and it seems to work:
design.CBPS <- svydesign(ids = ~1, weights = CBPS.object$weights, data = SUCCESS_All.01)
SE <- svyglm(dv ~ treatment, design = design.CBPS)
Additionally, the survey SEs are substantially different from the default lm() way of estimating coefficients and SEs provided by the CBPS package. For those more familiar with either the CBPS or survey packages, is there any reason why this would be inappropriate or violate some assumption of the CBPS method? I don't see anything in the CBPS documentation about how best to estimate standard errors, so that's why I'm slightly concerned.
Sandwich (robust) standard errors are the most commonly used standard errors after propensity score weighting (including CBPS). For the ATE, they are known to be conservative (too large); for the ATT, they can be either too large or too small. For parametric methods like CBPS, it is possible to use M-estimation to account for both the estimation of the propensity scores and the outcome model, but this is fairly complicated, especially for specialized models like CBPS.
The alternative is to use the bootstrap, where you bootstrap both the propensity score estimation and estimation of the treatment effect. The WeightIt documentation contains an example of how to do bootstrapping to estimate the confidence interval around a treatment effect estimate.
Using the survey package is one way to get robust standard errors, but there are other packages you can use, such as the sandwich package, as recommended in the MatchIt documentation. Under no circumstances should you use or even consider the usual lm() standard errors; these are completely inaccurate for inverse probability weights. The AsyVar() function in CBPS seems like it should provide valid standard errors, but in my experience these are also wildly inaccurate (compared to a bootstrap); the function doesn't even get the treatment effect right.
I recommend you use a bootstrap. It may take some time (you ideally want around 1000 bootstrap replications), but these standard errors will be the most accurate.
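As a rough illustration, here is a minimal sketch of that kind of bootstrap, assuming a simple weighted outcome regression (the formula, the variable names treatment, dv, x1, x2 and the data frame dat are all placeholders):
library(boot)
library(CBPS)
boot_att <- function(data, indices) {
  d <- data[indices, ]                                       # resample rows
  ps <- CBPS(treatment ~ x1 + x2, data = d)                  # re-estimate the propensity score model
  fit <- lm(dv ~ treatment, data = d, weights = ps$weights)  # weighted outcome model
  coef(fit)["treatment"]                                     # treatment effect estimate
}
set.seed(123)
boot_out <- boot(dat, boot_att, R = 1000)  # ~1000 replications
boot.ci(boot_out, type = "perc")           # percentile confidence interval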

Meta-analysis with multiple outcome variables

As you might be able to tell from the sample of my dataset, it contains a lot of dependency, with each study providing multiple outcomes for the construct I am looking at. I was planning on using the metacor library because I only have information about the sample size but not the variance. However, all methods I came across that deal with dependency, such as the robumeta package, use the variance (I know some people average the effect sizes within a study, but I read that tends to produce larger error rates). Do you know if there is an equivalent package that uses only the sample size, or is it mathematically impossible to determine the weights without it?
Please note that I am a student, not an expert.
You could use the escalc function of the metafor package to calculate variances for each effect size. In the case of correlations, it only needs the raw correlation coefficients and the corresponding sample sizes.
See the section "Outcome Measures for Variable Association" at https://www.rdocumentation.org/packages/metafor/versions/2.1-0/topics/escalc
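A minimal sketch (ri and ni are placeholder column names for the raw correlations and sample sizes in a data frame dat):
library(metafor)
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)  # Fisher's z effect sizes and sampling variances
res <- rma(yi, vi, data = dat)                                 # random-effects meta-analysis
summary(res)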

Adjusting priors in the package BayesFactor in R

I have some pilot data that I should be able to exploit to adjust the prior in a Bayes t-test on a newer dataset.
I've been performing Bayes t-tests using the default settings via the package BayesFactor in R. Can anyone shed some light on how exactly I can go about adjusting the prior for such a test?
Additionally, what do I need from the pilot data to make this happen? I suspect an effect size?
Here's an example of how to employ the Bayes t-test using the default settings:
ttestBF(x = df1$Value, y = df2$Value, paired = TRUE)
Thanks for your time.
For reference, see Rouder et al. (2009). The BayesFactor package in R uses a JZS prior; see the explanation in the documentation of the ttestBF function:
A noninformative Jeffreys prior is placed on the variance of the normal population, while a Cauchy prior is placed on the standardized effect size. The rscale argument controls the scale of the prior distribution, with rscale=1 yielding a standard Cauchy prior. See the references below for more details.
For the rscale argument, several named values are recognized: "medium", "wide", and "ultrawide". These correspond to r scale values of sqrt(2)/2, 1, and sqrt(2) respectively.
Then in the paper it is said:
For both JZS and scaled-information priors, as r is increased, the Bayes factor provides increased support for the null.
This basically means that if you expect really small effect sizes, you should lower the r parameter.
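For example, a hedged sketch of passing a smaller scale (the value 0.5 is just a placeholder; you would pick it based on the effect size you expect):
library(BayesFactor)
ttestBF(x = df1$Value, y = df2$Value, paired = TRUE, rscale = 0.5)
# or one of the named scales ("medium", "wide", "ultrawide"):
ttestBF(x = df1$Value, y = df2$Value, paired = TRUE, rscale = "wide")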
About your second question:
You should be able to use your piloting data as an estimate for the expected effect size and adjust the priors accordingly.
Be aware that you should not adjust your prior with regard to the observed data (i.e. the new data).
Furthermore, with regard to the BayesFactor package, I would assume that the default priors should work pretty well with most data (at least if it's from psychology). See the other references provided in the help file.
I hope this helps a little :). Unfortunately, I cannot tell you exactly how to calculate the best scale for your effect size, as there is also a trade-off between effect size and BF for really large sample sizes.

Unstable estimates from poLCA

I am trying to run a latent class analysis with covariates using the poLCA package. However, every time I run the model, the multinomial logit coefficients come out different. I have accounted for changes in the order of the classes, and I set a very high number of replications (nrep=1500). However, rerunning the model I still obtain different results. For example, I have 3 classes (high, low, medium). No matter the order in which the classes are considered in the estimation, the multinomial model will give me different coefficients for the same comparisons (such as low vs high and medium vs high) after different estimations. Should I increase the number of repetitions further in order to get stable results? Any idea why this is happening? I know that with set.seed() I can replicate the results, but I would like to obtain stable estimates to be able to claim the validity of the results. Thank you very much!
From the manual (?poLCA):
As long as probs.start=NULL, each function call will use different (random) initial starting parameters
You need to use set.seed() or set probs.start in order to get consistent results across function calls.
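A minimal sketch (the formula f, data frame dat and nclass = 3 are placeholders for your own model):
library(poLCA)
set.seed(42)
m1 <- poLCA(f, data = dat, nclass = 3, nrep = 50)
# Refit from the stored starting values to reproduce the same solution without relying on the seed
m2 <- poLCA(f, data = dat, nclass = 3, probs.start = m1$probs.start)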
Actually, if you are not converging to the same solution from different starting points, you may have a data problem.
LCA uses a form of maximum likelihood estimation. If there is no convergence, you have an under-identification problem: you have too little information to estimate the number of classes you want. A model with fewer classes might run, or you will have to impose some a priori restrictions.
You might wish to read Latent Class and Latent Transition Analysis by Collins. It was a great help for me.

How to deal with heteroscedasticity in OLS with R

I am fitting a standard multiple regression with OLS method. I have 5 predictors (2 continuous and 3 categorical) plus 2 two-way interaction terms. I did regression diagnostics using residuals vs. fitted plot. Heteroscedasticity is quite evident, which is also confirmed by bptest().
I don't know what to do next. First, my dependent variable is reasonably symmetric (I don't think I need to try transformations of my DV). My continuous predictors are also not highly skewed. I want to use weights in lm(); however, how do I know what weights to use?
Is there a way to automatically generate weights for performing weighted least squares, or are there other ways to go about it?
One obvious way to deal with heteroscedasticity is to estimate heteroscedasticity-consistent standard errors. These are most often referred to as robust or White standard errors.
You can obtain robust standard errors in R in several ways. The following page describes one possible and simple way to obtain robust standard errors in R:
https://economictheoryblog.com/2016/08/08/robust-standard-errors-in-r
However, sometimes there are more subtle and often more precise ways to deal with heteroscedasticity. For instance, you might encounter grouped data and find yourself in a situation where the error variances are heterogeneous across your dataset but homogeneous within groups (clusters). In this case you might want to apply clustered standard errors. See the following link on how to calculate clustered standard errors in R:
https://economictheoryblog.com/2016/12/13/clustered-standard-errors-in-r
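A minimal sketch of both approaches using the sandwich and lmtest packages (the model formula, data frame dat and cluster variable group are placeholders):
library(sandwich)
library(lmtest)
fit <- lm(y ~ x1 + x2, data = dat)
coeftest(fit, vcov = vcovHC(fit, type = "HC3"))      # heteroscedasticity-consistent (robust) SEs
coeftest(fit, vcov = vcovCL(fit, cluster = ~group))  # clustered SEs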
What is your sample size? I would suggest that you make your standard errors robust to heteroskedasticity but do not worry about heteroskedasticity otherwise. The reason is that, with or without heteroskedasticity, your parameter estimates are unbiased (i.e. they are fine as they are). The only thing that is affected (in linear models!) is the variance-covariance matrix, i.e. the standard errors of your parameter estimates. Unless you only care about prediction, adjusting the standard errors to be robust to heteroskedasticity should be enough.
See e.g. here how to do this in R.
Btw, for your solution with weights (which is not what I would recommend), you may want to look into ?gls from the nlme package.
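For instance, a hedged sketch with gls(), modelling the error variance as a power of the fitted values (the formula and data frame dat are placeholders; varPower is only one of several variance functions you could try):
library(nlme)
fit_gls <- gls(y ~ x1 + x2, data = dat, weights = varPower(form = ~fitted(.)))
summary(fit_gls)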
