"system is computationally singular" error when using `gmm` (GMM Estimation) - r

I am trying to use the gmm package in R to estimate the parameters (a-f) of a linear model:
LEV1 = a*Macro + b*Firm + c*Sector + d*qtr + e*fqtr + f*tax
Macro, Firm, and Sector are matrices with n rows; qtr, fqtr, and tax are vectors of length n.
I have one large data frame called unconstrd that has all of the data. First, I break that data into separate matrices:
v_LEV1 <- as.matrix(unconstrd$LEV1)
Macro <- as.matrix(cbind(unconstrd$Agg_Corp_Prof,unconstrd$R1000_TR, unconstrd$CP_Spread))
Firm <- as.matrix(cbind(unconstrd$ppe_ratio, unconstrd$op_inc_ratio_avg, unconstrd$selling_exp_avg,
unconstrd$tax_avg, unconstrd$Mark_to_Bk, unconstrd$mc_ratio))
Sector <- as.matrix(cbind(unconstrd$Sect_Flag03,
unconstrd$Sect_Flag04, unconstrd$Sect_Flag05, unconstrd$Sect_Flag06,
unconstrd$Sect_Flag07, unconstrd$Sect_Flag08, unconstrd$Sect_Flag12,
unconstrd$Sect_Flag13, unconstrd$Sect_Flag14, unconstrd$Sect_Flag15,
unconstrd$Sect_Flag17))
v_qtr <- as.matrix(unconstrd$qtr)
v_fqtr <- as.matrix(unconstrd$fqtr)
v_tax <- as.matrix(unconstrd$tax_dummy)
Then, I bind the data together for the x variable called by gmm:
h=cbind(Macro,Firm,Sector,v_qtr, v_fqtr, v_tax)
Then, I invoke gmm:
gmm1 <- gmm(v_LEV1 ~ Macro + Firm + Sector + v_qtr + v_fqtr + v_tax, x=h)
I get the message:
Error in solve.default(crossprod(hm, xm), crossprod(hm, ym)) :
system is computationally singular: reciprocal condition number = 1.10214e-18
I apologize in advance and admit that I'm a neophyte at R and have never used GMM before. The gmm function is so general that, although I've looked at the examples available on the web, nothing seems specific enough to help my situation.

You are trying to fit a model on a design matrix that does not have full rank. Try excluding some of the variables and/or look for errors in the data. We cannot say much more without your data, or at least a sample.
That's more of a modelling question for CrossValidated than a programming question for Stack Overflow.
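One quick way to check for an exact linear dependency is to inspect the rank of the stacked regressor matrix yourself. This is only a sketch using the objects you built above; the column of 1s stands in for the intercept that the formula interface adds (a full set of sector dummies plus an intercept is one common source of exact collinearity):
# A diagnostic sketch: check whether the regressor matrix has full column rank
X <- cbind(1, Macro, Firm, Sector, v_qtr, v_fqtr, v_tax)
qr_X <- qr(X)
ncol(X) - qr_X$rank                 # 0 means full column rank
qr_X$pivot[-seq_len(qr_X$rank)]     # positions of the redundant columns, if any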

I was pretty certain there was no linear dependency between my variables, but I went through the exercise of adding one variable at a time to see what was causing the error. In the end, I asked a colleague to run GMM in SAS and it ran perfectly, with no error messages. I'm not sure what the problem with the R version is, but at this point I have a solution and have given up on GMM in R.
Thanks to everyone who tried to help.

Related

Error during wrapup: long vectors not supported yet: in glm() function

I found several questions on Stack Overflow regarding this topic (some of them without any answer), but so far nothing related to this error in a regression.
I'm running a probit model in R with (I'm guessing) too many fixed effects (year and place):
myprobit <- glm(factor(Y) ~ factor(T) + factor(X1) + factor(X2) + factor(X3) +
factor(YEAR) + factor(PLACE),
family = binomial(link = "probit"),
data = DT)
The PLACE variable has about 1000 unique values and YEAR 8 values. The dataset DT has 13,099,225 obs and 79 columns.
The error I got is:
Error: cannot allocate vector of size 59.3 Gb
Error during wrapup: long vectors not supported yet: ../include/Rinlinedfuns.h:519
The machine I'm using has 128 GB of RAM.
So I don't know what I can do without changing the function. Does anyone know how to deal with this issue? Thanks!
In order to close this question, I have to mention that @Axeman's answer is the only feasible approach for my problem. The whole issue is that there is not enough memory to manage such a huge design matrix.
Therefore, running a probit regression with the biglm package's bigglm() function is the only solution I have found so far.
Nevertheless, I realize that, because of how the biglm package works (iterating over chunks of the data), using factor() variables on the RHS is problematic whenever a factor level is not represented in a chunk. In other words, if a factor variable has 5 levels but only 4 of them appear in a given chunk, the estimation will fail.
There are several questions and comments about this on Stack Overflow.
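For reference, here is a minimal sketch of the bigglm() route, assuming DT fits in memory as an ordinary data frame, Y is coded 0/1, and the chunk size is purely illustrative. Converting the categorical columns to factors up front fixes their level sets, so every chunk yields the same design-matrix columns:
library(biglm)
# A sketch only: DT is assumed to be an ordinary data frame and Y to be 0/1.
# Fix the factor levels once, before any chunking happens.
for (v in c("T", "X1", "X2", "X3", "YEAR", "PLACE")) DT[[v]] <- factor(DT[[v]])
myprobit <- bigglm(Y ~ T + X1 + X2 + X3 + YEAR + PLACE,
                   family = binomial(link = "probit"),
                   data = DT,
                   chunksize = 100000)   # illustrative value
summary(myprobit)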

Why is my glmer model in R taking so long to run?

I had previously been using simple stats in Statistica, but I need R for my masters research. I am trying to run the following code to test for any significant interactions, and it just runs forever. If I simplify the model by taking month out, then it runs, but biologically it makes sense that month is significant, so I would really like the model to run with month included as a factor. Once I run the model, the stop sign in RStudio just stays present for hours. What could be the reason for this? Like I said, I'm very new and it has been really difficult to learn this on my own. I am working with presence/absence data (as %) which I cbind as my dependent variable. So far this is what my code looks like:
library(car)
library(languageR)
library(AICcmodavg)
library(lme4)
Scat <- read.csv("Scat2.csv", header=T)
attach(Scat)
names(Scat)
y <- cbind(Present,Absent)
ScatData <- glmer(y ~ Estate * Species * Month * Content * (1|Site) + Min + Max,family=binomial)
summary(ScatData)
Once I get to running the actual model, I don't even get to the summary because R is never done computing the results. I ran the model for approximately 4 hours, and when I clicked on the stop sign, I received this message:
Warning message:
In (function (fn, par, lower = rep.int(-Inf, n), upper = rep.int(Inf, :
failure to converge in 10000 evaluations
I would really appreciate some input on this matter.
You have a few problems with your model specification. Your model
y ~ Estate * Species * Month * Content * (1|Site) + Min + Max
is asking for all the main effects and interactions of estate, species, month, content, and site, which is incredibly complex.
Also, you have specified site as a random effect and asked for its interaction with fixed effects. I'm not sure whether that's possible, but it certainly seems wrong. You should decide whether you want site to be a fixed effect or a random effect.
If you post a minimal replicable example, I can give more specific advice.
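As a sketch of the kind of simplification I mean (untested, and assuming the column names from your code above): keep Site as a random intercept and start from main effects only, adding interactions one at a time if they are justified.
library(lme4)
# A minimal sketch: Site as a random intercept, main effects only to start with
ScatData <- glmer(cbind(Present, Absent) ~ Estate + Species + Month + Content +
                    Min + Max + (1 | Site),
                  family = binomial, data = Scat)
summary(ScatData)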

"Simulating" a large number of regressions with different predictor values

Let's say I have the following data and I'm interested in examining some counterfactuals. In particular, I want to examine whether there would be changes in predicted income given a change in one of the predictors (e.g. gpa). The best way I can think of to do this is to write a loop that runs this regression 1:n times. However, how do I also make adjustments to the data frame while running through the loop? I'm really hoping that there is a base R function, or something in a package, that someone can point me to.
df = data.frame(year=c(2000,2001,2002,2003,2004,2005,2006,2007,2009,2010),
income=c(100,50,70,80,50,40,60,100,90,80),
age=c(26,30,35,30,28,29,31,34,20,35),
gpa=c(2.8,3.5,3.9,4.0,2.1,2.65,2.9,3.2,3.3,3.1))
df
mod = lm(income ~ age + gpa, data=df)
summary(mod)
Here are some counterfactuals that may be worth considering when looking at the relationship between age, gpa, and income.
# What if everyone in the class had a lower/higher gpa?
df$gpa2 = df$gpa + 0.55
# what if one person had a lower/higher gpa?
df$gpa2[3] = 1.6
# what if the most recent employee/person had a lower/higher gpa?
df[10,4] = 4.0
With or without looping, what would be the best way to "simulate" a large (1000+) number of regression models in order to examine various counterfactuals, and then save those results in some data structure? Is there a "counterfactual" analysis package which could save me a bit of work?
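For concreteness, here is a minimal sketch of the looping approach I have in mind, using the mod and df objects above; the grid of gpa shifts is purely illustrative, and the results are stacked into one data frame at the end.
# A sketch: evaluate the model's predictions under many counterfactual data sets
shifts  <- seq(-1, 1, length.out = 1000)            # 1000 illustrative scenarios
results <- lapply(seq_along(shifts), function(i) {
  df_cf     <- df
  df_cf$gpa <- df$gpa + shifts[i]                   # adjust the data for this scenario
  data.frame(scenario = i, shift = shifts[i],
             predicted = predict(mod, newdata = df_cf))
})
results <- do.call(rbind, results)
head(results)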

adding interaction to GEE - model matrix is rank deficient

I am trying to run a GEE model using geepack. I have done this successfully with the call below.
Call:
geeglm(formula = pdc1 ~ country + post + time_post +
TIME + age + sex + country * time_post + country * post, family = gaussian("identity"), data = lipid_data,
id = id, waves = ID, corstr = "ar1", std.err = "san.se")
where:
pdc1=numeric
country=factor
post=factor
time_post=numeric
TIME=numeric
I'm trying to run the exact same model on different data, which are in the exact same format as above. I can run the model with main effects, but not with the interactions. This is the error I get:
Error in geeglm(pdc1 ~ STATE + post + time_post + TIME + STATE * post, :
Model matrix is rank deficient; geeglm can not proceed
I have tried recoding STATE (and post) as a numeric variable, but this does not prove fruitful. I don't understand what's going on; the variables hold the same kind of data as in the first model and are coded the same way. Does anyone know what could be going on here?
I recently solved a similar problem: it was caused by one of the covariates being constant across the dataset. For example, STATE might be the same state for every observation, which would produce the "rank deficient" error. This might also explain why the error is specific to your new dataset.
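If it is not obvious which term is the problem, a quick diagnostic (just a sketch; new_data is a hypothetical name for the data frame in your failing call) is to check for constant covariates and then build the fixed-effects design matrix yourself and inspect its rank:
# How many distinct values does each covariate have? A value of 1 means it is constant.
sapply(new_data[c("STATE", "post", "time_post", "TIME")], function(v) length(unique(v)))
# Which design-matrix columns are linearly dependent?
X <- model.matrix(~ STATE + post + time_post + TIME + STATE * post, data = new_data)
qr_X <- qr(X)
ncol(X) - qr_X$rank                              # > 0 means rank deficient
colnames(X)[qr_X$pivot[-seq_len(qr_X$rank)]]     # the offending columns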
I am not sure if you encountered the same problem as I did. I was fitting a geeglm model in R and got the same error.
My model is something like:
geeglm(outCC$counts~outCC$partage:as.factor(outCC$contage)-1,
family=poisson(link="log"),id=outCC$partid,data=outCC,subset=sel,weights=outCC$partwe)
This model runs without any problem.
However, when I tried to add another binary covariate, setting (Urban/Rural), to the model:
geeglm(outCC$counts~outCC$partage:as.factor(outCC$contage)+as.factor(setting)-1,
family=poisson(link="log"),id=outCC$partid,data=outCC,subset=sel,weights=outCC$partwe)
R gave me the error:
Model matrix is rank deficient; geeglm can not proceed
Then I decided to create a new binary variable, setting_new (0/1), and put this new variable in the model:
geeglm(outCC$counts~outCC$partage:as.factor(outCC$contage)+setting_new-1,
family=poisson(link="log"),id=outCC$partid,data=outCC,subset=sel,weights=outCC$partwe)
And now the problem is solved.
I got the same problem when trying to put another categorical variable into the above-mentioned model, with the syntax:
as.factor(Q12_travel)
The model could not run until I created new dummy variables (0/1) and put them into the model instead.
Your problem may be different, but I suggest trying this approach to see if it solves it.
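For reference, creating such a 0/1 dummy by hand can look like the following sketch (it assumes setting is a column of outCC and that "Urban" is one of its two values; both names are illustrative):
# A sketch: recode the binary factor as an explicit 0/1 variable
outCC$setting_new <- as.integer(outCC$setting == "Urban")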

Error in fitting a model with gee(): NA/NaN/Inf in foreign function call (arg 3)

I'm fitting a gee model on a dataset of 13,500 observations (here, students). Students are grouped into 52 different schools. I know there is evidence that students are nested within schools (low ICC), and therefore I should adjust for this nesting effect in the variance-covariance matrix. What I'm planning to do is first fit a gee model with an exchangeable variance-covariance structure. Then, on top of that, I'll apply the Huber-White sandwich estimator, also known as the robust variance estimator. I wrote my own code for the robust variance estimator and it works perfectly, but my gee statement doesn't work and gives the error below:
NA/NaN/Inf in foreign function call (arg 3)
Here is my code:
STMath.OneYr.C1 = gee(postCSTMath1Yr ~ TRT1Yr + preCSTMath + preCSTENG +
post1YrGradeRef + ELLBaseLine + GENDER + ECODIS + ETHNICITY.F +
as.factor(FailedInd1Yr), data = UCI.clone[UCI.clone$COHORT0809 == "C1",],
id = post1YrSchIID, corstr = "exchangeable")
Unfortunately, the code above is not reproducible for you, so it is perhaps difficult to figure out what the issue is.
I would appreciate it if you could help me figure out how to solve it.
OK, this question is quite old but I ended up here, so this might help someone eventually.
Basically, this error is caused because, unlike in other libraries, the id argument is expected to be a numeric vector.
Indeed, the gee function is casting id as a double, which I don't really understand. Here are the implicated lines (l. 119-120 of the function):
if (!(is.double(id)))
id <- as.double(id)
If your id column is a character vector, just cast it to a factor, or use some function (like dplyr::min_rank) to turn it into a numeric variable.
This should do the trick.
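For example, here is a minimal sketch using the id column from the question above (assuming post1YrSchIID is currently stored as a character vector):
# Convert the character id to numeric codes before calling gee()
UCI.clone$post1YrSchIID <- as.numeric(factor(UCI.clone$post1YrSchIID))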
