I am a complete beginner at R and don't have much time to complete this analysis.
I need to run propensity score matching. I am using RStudio and have:
- uploaded my dataset, which is called 'R' and was saved on my desktop
- installed and loaded the MatchIt package
My dataset has the following headings:
BA (my grouping variable indicating whether someone is on BA or not: 0 = off, 1 = on),
then age, sex, timesincediagnosis, TVS, and tscore, which are my matching variables.
I have adapted the following code, which I found online:
m.nn <- matchit(ba ~ age + sex + timesincediagnosis + TVS + tscore,
data = R, method= " nearest", ratio = 1)
summary(m.nn)
I am getting the following errors:
Error in summary(m.nn) : object 'm.nn' not found
Error in matchit(ba ~ age + sex + timesincediagnosis + TVS + tscore,
data = R, : " nearest" not supported.
I would really appreciate any help with why I am getting the errors or how I can change my code.
Thank you!
Credit goes to @MrFlick for noticing this, but the problem is that " nearest" is not an acceptable value to be passed to method. What you want is "nearest" (without the leading space in the string). (Note that the default method is nearest neighbor matching, so you can omit the method argument entirely if this is what you want to do.)
The first error (Error in summary(m.nn) : object 'm.nn' not found) occurs because the matchit() call failed with the other error, so R never created the m.nn object.
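Putting that together, a corrected call might look like this (assuming the treatment column is spelled BA, matching the headings listed in the question — adjust the case to whatever is actually in your data):

```r
library(MatchIt)

# 1:1 nearest-neighbor matching; note "nearest" has no leading space,
# and method = "nearest" could be omitted since it is the default
m.nn <- matchit(BA ~ age + sex + timesincediagnosis + TVS + tscore,
                data = R, method = "nearest", ratio = 1)
summary(m.nn)
```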
Related
I'm trying out various functions in the cjoint package but encountered errors immediately. When using the package's main function, amce(), my R console reports that the design argument is matched by multiple actual arguments, but I only supplied the one design object from the example data bundled with the package.
I'm not sure whether the package has been extensively updated or whether I have omitted something. I'd appreciate it if someone could shed some light on this.
# install.packages("cjoint")
# library(cjoint)
data("immigrationconjoint")
data("immigrationdesign")
# Run AMCE estimator using the conjoint design given
results <- amce(Chosen_Immigrant ~ Gender + Education + `Language Skills` +
                  `Country of Origin` + Job + `Job Experience` + `Job Plans` +
                  `Reason for Application` + `Prior Entry`,
                data = immigrationconjoint, cluster = TRUE,
                respondent.id = "CaseID", design = immigrationdesign)
Error in survey::svyglm(formula, design = svydesign, ...) :
formal argument "design" matched by multiple actual arguments
I have run a multiple imputation (m=45, 10 iterations) using MICE and am attempting to fit a series of confirmatory factor analysis and structural equation models on the imputed datasets using the runMI function from semTools. Nearly all of my variables are Likert scales, coded as ordered/ordinal. Here is my code for the first CFA, where mi.res.train is the mice-generated mids object:
ipc_c_model <- '
IPC_C =~ t2IPC6_1 + t2IPC6_2 + t2IPC6_3 + t2IPC6_4 + t2IPC6_5 + t2IPC6_6 + t2IPC6_7'
ipc_c_fit <- runMI(ipc_c_model, mi.res.train, fun = "cfa", ordered = TRUE)
The model fails to run and returns the following error:
Error in slot(value, what) :
no slot of name "internalList" for this object of class "lavaanList"
As far as I can see, the lavaan.mi object that this is supposed to create is a special type of lavaanList object. Any ideas as to what may be causing this error?
Thanks!
Hi all: thanks for this feedback. Unfortunately, I am using a restricted-use dataset, so I could not share much data without some extra steps. Fortunately, I updated a few packages and the code now appears to be working. I had tried that previously but apparently missed the lavaan package itself.
I'm trying to carry out covariate balancing with entropy balancing (method = "ebal" in WeightIt). The basic code is:
W1 <- weightit(Conformidad ~ SexoCon + DurPetFiscPrisión1 +
Edad + HojaHistPen + NacionCon + AnteVivos +
TipoAbog + Reincidencia + Habitualidad + Delitos,
data = Suspension1,
method = "ebal", estimand = "ATT")
I then want to check the balance using the summary function:
summary(W1)
This originally worked fine but now I get the error message:
Error in rep(" ", spaces1) : invalid 'times' argument
It's the same dataset and the same code, except that I changed some of the covariates. But now, even when I go back to the original covariates, I get the same error. Any ideas would be very much appreciated!
I'm the author of WeightIt. That looks like a bug. I'll take a look at it. Are you using the most updated version of WeightIt?
Also, summary() doesn't assess balance. To do that, you need to use cobalt::bal.tab(). summary() summarizes the distribution of the weights, which is less critical than examining balance. bal.tab() displays the effective sample size as well, which is probably the most important statistic produced by summary().
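A minimal sketch of checking balance with cobalt, continuing from the weightit() call in the question (W1 and the covariates come from there):

```r
library(cobalt)

# Balance table for the covariates using the weights stored in W1;
# un = TRUE also shows balance before weighting, for comparison
bal.tab(W1, un = TRUE)
```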
I encountered the same error message. It happens when the treatment variable passed to weightit() is coded as a factor or character rather than as numeric.
To make summary() work, recode the treatment as numeric 0/1.
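For example, assuming the treatment column is a factor with levels "No"/"Si" (hypothetical level names — adjust to your data), one way to recode it before calling weightit():

```r
# Recode a factor treatment to numeric 0/1 (assumed level names "No"/"Si")
Suspension1$Conformidad <- as.numeric(Suspension1$Conformidad == "Si")
```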
I am trying to run a glm model to check variation in cricket mass, which could be affected by age (numeric), altitude (high or low), temperature (two different temperatures), and the incubator (4 different incubators) they are kept in.
I have tried a glm model, which seems fine theoretically, and the data in Excel have all been checked as well. I am assuming I have to convert some of the data into binary or something similar.
glm(crick$Mass ~ crick$Altitude*crick$Age + crick$Altitude*crick$suare(Altitude) + 1/crick$Nymph.ID + 1/crick$Population + crick$Temperature*crick$Altitude + 1/crick$Incubator)
This is the code I am trying to run.
This is the error message:
Error in eval(predvars, data, env) : attempt to apply non-function
You need to properly specify the formula (e.g. see examples in ?glm). Try
glm(Mass ~ Altitude*Age + 1/Nymph.ID + 1/Population + Temperature*Altitude + 1/Incubator, data = crick)
Note that I excluded Altitude*suare(Altitude), which is invalid R formula syntax: suare is not a function, and trying to call it is what triggers the "attempt to apply non-function" error. You'll have to explain more about what you're trying to do here.
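As an aside, if the 1/... terms were meant to be random effects (as they would be in some other statistics packages), a mixed model might be closer to what's intended. A sketch under that assumption, using lme4's (1 | group) syntax for random intercepts:

```r
library(lme4)

# Assumes Nymph.ID, Population, and Incubator were intended as
# grouping factors for random intercepts (an assumption, not
# something stated in the question)
fit <- lmer(Mass ~ Altitude * Age + Temperature * Altitude +
              (1 | Nymph.ID) + (1 | Population) + (1 | Incubator),
            data = crick)
summary(fit)
```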
I'm trying to use esttab to output regression results in R. However, every time I run it I get an error:
Error in FUN(X[[i]], ...) : variable names are limited to 10000 bytes
Any ideas how to solve it? My code is below:
reg <- lm(y ~ ln_gdp + diffxln_gdp + diff + year, data=df)
eststo(reg)
esttab(store=reg)
The input data comes from approximately 25,000 observations, all coded as numeric. I can share more information if it's deemed relevant, but I don't know what that would be right now.
Thanks!