I have a 100k-row data frame on which I want to compute a Cochran–Mantel–Haenszel test.
My variables are educational level and a computed score cut into quantiles, my grouping variable is sex, and the call looks like this:
mantelhaen.test(db$education, db$score.grouped, db$sex)
This code throws the following error and warning:
Error in qr.default(a, tol = tol) : NA/NaN/Inf in foreign function call (arg 1)
In addition: Warning message: In ntot * rowsums : NAs produced by integer overflow
The error seems to be caused by my first variable: of the 7 variables I tested, only 2 trigger the problem, and they don't seem to share anything obvious in common.
Missing values and factor levels don't seem to differ between the variables that throw the error and those that don't. I tried with complete cases (na.omit) and the problem persists.
What triggers this error? What does it mean?
How can I get rid of it?
Interesting posts: R: NA/NaN/Inf in foreign function call (arg 1), What is integer overflow in R and how can it happen?
ADDENDUM: here is the result of str() (the failing variables are education and imc.cl):
str(db[c("education","score.grouped","sex", ...)])
'data.frame': 104382 obs. of 7 variables:
$ age.cl: Ord.factor w/ 5 levels "<30 ans"<"30-40 ans"<..: 5 2 1 1 3 4 2 3 4 4 ...
..- attr(*, "label")= chr "age"
$ emploi2 : Factor w/ 8 levels "Agriculteurs exploitants",..: 3 5 6 8 8 8 8 3 3 3 ...
..- attr(*, "label")= chr "CSP"
$ tabac : Factor w/ 4 levels "ancien fumeur",..: 4 1 4 4 3 4 4 1 4 4 ...
..- attr(*, "label")= chr "tabac"
$ situ_mari2 : Factor w/ 3 levels "Vit seul","Divorsé, séparé ou veuf",..: 3 2 1 1 1 3 1 3 2 3 ...
..- attr(*, "label")= chr "marriage"
$ education : Factor w/ 3 levels "Universitaire",..: 1 1 1 1 3 1 1 1 1 1 ...
$ revenu.cl : Factor w/ 4 levels "<1800 euros/uc",..: 3 4 2 NA 4 1 1 4 4 1 ...
$ imc.cl : Ord.factor w/ 6 levels "Maigre"<"Normal"<..: 2 2 1 2 3 1 3 2 2 3 ...
..- attr(*, "label")= chr "IMC"
EDIT: by digging inside the function, I found that the error and warning are caused by a call to qr.solve. I don't understand any of this yet, but I'll try to dive deeper.
EDIT2: inside qr.solve, the error is thrown by a Fortran call to .F_dqrdc2. This is so far beyond my level that my nose is starting to bleed.
EDIT3: I tried taking the head and tail of my data to find out which row is responsible:
db2 = db %>% head(99787) #fails at 99788
db2 = db %>% tail(99698) #fails at 99699
mantelhaen.test(db2$education, db2$score.grouped, db2$sex)
This doesn't give me much information, but maybe it will give you some.
I was able to replicate the problem by making the data set bigger:
set.seed(101)
n <- 500000
db <- data.frame(
  education = factor(sample(1:3, replace = TRUE, size = n)),
  score     = factor(sample(1:5, replace = TRUE, size = n)),
  sex       = sample(c("M", "F"), replace = TRUE, size = n)
)
After this, mantelhaen.test(db$education, db$score, db$sex) gives the reported error.
Thankfully, the real problem is not within the guts of the QR decomposition code: rather, it occurs when setting up a matrix prior to the QR decomposition. Two computations, ntot*colsums and ntot*rowsums, overflow R's capacity for integer arithmetic. There's a relatively easy workaround: create a modified version of the function.
Copy the source code: dump("mantelhaen.test", file="my_mh.R").
Edit the source code:
line 1: change the function name to my_mantelhaen.test (to avoid confusion);
lines 199 and 200: change ntot to as.numeric(ntot), converting the integer to double precision before the overflow happens.
Run source("my_mh.R") to read in the new function.
Now
my_mantelhaen.test(db$education, db$score, db$sex)
should work.
You should definitely test the new function against the old one, on cases where the old one works, to make sure you get the same answer.
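For reference, here is the overflow in isolation (a minimal sketch with illustrative values; in the real function rowsums is a vector of counts, but the mechanism is the same):
ntot <- 400000L             # both operands are integers
rowsums <- 200000L
ntot * rowsums              # NA, with an integer-overflow warning
as.numeric(ntot) * rowsums  # 8e+10, computed in double precision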
Now posted to the R bug list; we'll see what happens ...
Update 11 May 2018: this is fixed in the development version of R (3.6.0-to-be).
Related
I am using a package called diagmeta for meta-analysis purposes. I can use this package with the built-in data set Schneider2017. However, when I make my own data set, I get the following error:
Error: number of observations (=300) <= number of random effects (=3074) for term (Group * Cutoff | Study); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable
Another thread here on SO suggests the error is caused by the data format of one or more columns. I have made sure every column's data type matches the one in the Schneider2017 dataset, with no effect.
Link to the other thread
I have tried extracting all of the data from the Schneider2017 dataset into Excel and then importing that dataset back through RStudio. This again makes no difference, which suggests to me that something in the data format is different, although I can't see what.
diag2 <- diagmeta(tpos, fpos, tneg, fneg, cutpoint,
studlab = paste(author,year,group),
data = SRschneider,
model = "DIDS", log.cutoff = FALSE,
check.nobs.vs.nRE = "ignore")
I expected the same successful execution and plotting as with the built-in data set, but keep getting this error.
Result from running str() on my dataset:
> str(SRschneider)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 150 obs. of 10 variables:
$ ...1 : num 1 2 3 4 5 6 7 8 9 10 ...
$ study_id: num 1 1 1 1 1 1 1 1 1 1 ...
$ author : chr "Arora" "Arora" "Arora" "Arora" ...
$ year : num 2006 2006 2006 2006 2006 ...
$ group : chr NA NA NA NA ...
$ cutpoint: chr "6" "7.0" "8.0" "9.0" ...
$ tpos : num 133 131 130 127 119 115 113 110 102 98 ...
$ fneg : num 5 7 8 11 19 23 25 28 36 40 ...
$ fpos : num 34 33 31 30 28 26 25 21 19 19 ...
$ tneg : num 0 1 3 4 6 8 9 13 15 15 ...
Just a quick follow-up on Ben's detailed answer.
The statistical method implemented in diagmeta() expects argument cutpoint to be a continuous variable. We added a corresponding check for argument cutpoint (as well as arguments TP, FP, TN, and FN) in version 0.3-1 of the R package diagmeta; see the commit in the GitHub repository for technical details.
Accordingly, the following R commands will result in a more informative error message:
data(Schneider2017)
diagmeta(tpos, fpos, tneg, fneg, as.character(cutpoint),
studlab = paste(author, year, group), data = Schneider2017)
You said that you
have made sure every column's data type matches that in the Schneider2017 dataset
but that doesn't seem to be true. Besides differences between num (numeric) and int (integer) types (which actually aren't typically important), your data has
$ cutpoint: chr "6" "7.0" "8.0" "9.0" ...
while str(Schneider2017) has
$ cutpoint: num 6 7 8 9 10 11 12 13 14 15 ...
Having your cutpoint be a character rather than numeric means that R will try to treat it as a categorical variable (with many discrete levels). This is very likely the source of your problem.
The cutpoint variable is likely a character because R encountered some value in this column that couldn't be interpreted as numeric (possibly something as simple as a typographic error). You can use SRschneider$cutpoint <- as.numeric(SRschneider$cutpoint) to convert the variable to numeric by brute force (values that can't be interpreted will become NA), but it would be better to go upstream and find out where the problem is.
If you use tidyverse packages to load your data you should get a list of "parsing problems" that may be useful. You can also try cp <- SRschneider$cutpoint; cp[which(is.na(as.numeric(cp)))] to look at the values that can't be converted.
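For illustration (hypothetical values), a single stray non-numeric entry is enough to make the whole column character, and it shows up as NA on conversion:
cutpoint <- c("6", "7.0", "8,0", "9.0")   # note the comma typo in "8,0"
as.numeric(cutpoint)                      # 6 7 NA 9, with a coercion warning
cutpoint[is.na(as.numeric(cutpoint))]     # "8,0", the offending value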
I have been tearing my hair out over this for the last hour: the following code was working perfectly a couple of hours ago, and now I have no idea why it doesn't anymore. I have searched other questions about the "undefined columns selected" error, but I think I have accounted for everything in those answers. I am sure there is some tiny thing I have overlooked or accidentally left in, but I can't see it!
I have a data frame with both factor and numeric variables. I want to subset it so that I keep all of the factor variables and remove the numeric variables whose columns have a mean < 0.1.
I found the following code in another question on Stack Overflow which, slightly modified, worked well on my test data (a smaller sub-dataset I am using for testing before trying the code on a big 3 GB object):
meanfunction01 <- function(x) {
  if (is.numeric(x)) {
    mean(x) > 0.1
  } else {
    TRUE
  }
}
# then apply the function to the data frame
Zdata <- Data1[, sapply(Data1, meanfunction01)]
I swear I was using this a few hours ago; when I came back and tried to use it again, it had stopped working and now just returns the following error:
Error in `[.data.frame`(Data1, , sapply(Data1, meanfunction01)) :
undefined columns selected
I was trying to modify the function so that it would loop over multiple objects (I have 54 objects I want to apply it to and didn't want to type them all manually), but I don't think I edited the original function, and now it has stopped working.
A brief str() of my data:
> str(Data1[1:10])
'data.frame': 11 obs. of 10 variables:
$ Name : Factor w/ 11688 levels "GTEX-1117F-0226-SM-5GZZ7",..: 8186 8242 8262 8270 8343 8388 8403 8621 8689 8709 ...
$ SEX : Factor w/ 2 levels "Female","Male": 1 2 2 1 1 2 2 1 2 1 ...
$ AGE : Factor w/ 6 levels "20-29","30-39",..: 4 4 1 3 3 1 3 3 3 2 ...
$ CIRCUMSTANCES: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Tissue.x : Factor w/ 53 levels "Adipose_Subcutaneous",..: 7 7 7 7 7 7 7 7 7 7 ...
$ ENSG00000223972.4 : num 0 0.0701 0.0339 0.1149 0.0549 ...
$ ENSG00000227232.4 : num 12.5 17.2 13.1 16 15.7 ...
$ ENSG00000243485.2 : num 0.0717 0 0.1508 0 0.061 ...
$ ENSG00000237613.2 : num 0 0.0654 0 0.0402 0.0768 ...
$ ENSG00000268020.2 : num 0 0.0421 0.0611 0 0 ...
So, if your only issue is changing the class of the integer variables in your data.frame but you have many columns (>10000), you may want to consider converting your data.frame into a data.table. Your code would then look like this:
library(data.table)
Data1 <- data.table(Data1)
# or, if your data is in a csv file, use fread instead of read.csv,
# which automatically gives you a data.table
Then you just need to find the integer columns using this:
which(sapply(Data1,is.integer))
Putting it all together using data.table syntax:
Data1[, which(sapply(Data1, is.integer)) := lapply(.SD, as.numeric),
      .SDcols = which(sapply(Data1, is.integer))]
Note that you don't need to assign the result of the above line to anything: data.table modifies objects by reference (in place), which is what makes it much faster than data.frame or tibble objects. Running the above line therefore updates your Data1 object efficiently. The classes of the other, non-integer columns (i.e., the factors) remain unchanged.
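A quick sanity check after the update (illustrative; the column names are whatever your data contains):
sapply(Data1, class)            # former integer columns now show "numeric"
any(sapply(Data1, is.integer))  # FALSE if the conversion covered everything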
Please follow up if you have further questions, but this should answer your comment. Best of luck!
I have written an R script which runs successfully and predicts output, but only when a csv with multiple entries is passed as input to the classifier.
training_set = read.csv('finaldata.csv')
library(randomForest)
set.seed(123)
classifier = randomForest(x = training_set[-5],
y = training_set$Song,
ntree = 50)
test_set = read.csv('testSet.csv')
y_pred = predict(classifier, newdata = test_set)
The above code runs successfully, but instead of giving 10+ entries to the classifier, I want to pass a single-row data.frame as input. That works with other classifiers; why not this one?
So the following code doesn't work and throws an error:
y_pred = predict(classifier, data.frame(Emot="happy",Pact="Walking",Mact="nothing",Session="morning"))
Error in predict.randomForest(classifier, data.frame(Emot = "happy", :
Type of predictors in new data do not match that of the training data.
I even tried keeping a single entry in testinput.csv, and it still throws the same error! How do I solve this? This code is the back end of another program of mine, and I want to pass a single entry as the test input for prediction. Also, all variables are factors in the training as well as the test set. Help appreciated.
PS: previous solutions to the same error didn't help me.
str(test_set)
'data.frame': 1 obs. of 5 variables:
$ Emot : Factor w/ 1 level "fear": 1
$ Pact : Factor w/ 1 level "Bicycling": 1
$ Mact : Factor w/ 1 level "browsing": 1
$ Session: Factor w/ 1 level "morning": 1
$ Song : Factor w/ 1 level "Dusk Till Dawn.mp3": 1
str(training_set)
'data.frame': 1052 obs. of 5 variables:
$ Emot : Factor w/ 8 levels "anger","contempt",..: 4 7 6 6 4 3 4 6 4 6 ...
$ Pact : Factor w/ 5 levels "Bicycling","Driving",..: 1 2 2 2 4 3 1 1 3 4 ...
$ Mact : Factor w/ 6 levels "browsing","chatting",..: 1 6 1 4 5 1 5 6 6 6 ...
$ Session: Factor w/ 4 levels "afternoon","evening",..: 3 4 3 2 1 3 1 1 2 1 ...
$ Song : Factor w/ 101 levels "Aaj Ibaadat.mp3",..: 29 83 47 72 29 75 77 8 30 53 ...
OK, this worked, though it's a weird solution: equalize the factor levels of the training and test sets. The following code binds the first row of the training set to the test set and then deletes it:
test_set <- rbind(training_set[1, ] , test_set)
test_set <- test_set[-1,]
Done! It works for a single input as well as a single-entry .csv file, without the model raising an error.
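Why this works (a minimal sketch with made-up data): rbind() on data frames takes the union of factor levels, so the borrowed training row leaves the test set's factor columns carrying the full set of training levels, which is what predict.randomForest checks:
train <- data.frame(f = factor(c("a", "b", "c")))
test  <- data.frame(f = factor("a"))            # only one level: "a"
test2 <- rbind(train[1, , drop = FALSE], test)  # borrow the training levels
test2 <- test2[-1, , drop = FALSE]              # drop the borrowed row
levels(test2$f)                                 # "a" "b" "c"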
I am using anesrake to weight some survey data, but am getting a "non-numeric argument to binary operator" error. The error only occurs after I have added names to the list I use as targets:
gender1 <- c(0.516166000986901, 0.483833999013099)
age <- c(0.15828262425613, 0.364861110549873, 0.429947760183493, 0.0469085050104993)
mylist <- list(gender1, age)
names(mylist) <- c("gender1", "age")
result <- anesrake(mylist, france, caseid = france$caseid, iterate = TRUE)
Error in x + weights : non-numeric argument to binary operator
In addition: Warning message:
In anesrake(targets, france, caseid = france$caseid, iterate = TRUE) :
Targets for age do not sum to 100%. Adjusting values to total 100%
This also says that the targets for age don't sum to 100%, which they do, so I'm not sure what that's about either. If I leave out the names(mylist) bit, I get the following error, presumably because R doesn't know which variables to use, but at least not the non-numeric-argument error:
Error in selecthighestpcts(discrep1, inputter, pctlim) :
No variables are off by more than 5 percent using the method you have chosen, either weighting is unnecessary or a smaller pre-raking limit should be chosen.
The variables in the data frame are named the same as the targets in the list, and are numeric:
> str(france)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 993 obs. of 5 variables:
$ Gender :Classes 'labelled', 'numeric' atomic [1:993] 2 2 2 2 2 2 2 2 2 2 ...
.. ..- attr(*, "label")= chr "Gender"
$ Age2 : num 2 3 2 2 2 2 2 1 2 3 ...
$ gender1: num 2 2 2 2 2 2 2 2 2 2 ...
$ caseid : int 1 2 3 4 5 6 7 8 9 10 ...
$ age : num 2 3 2 2 2 2 2 1 2 3 ...
I have also tried converting gender1 and age to factor variables (the numbers represent levels of each variable: gender has 2, age has 4), but with the same result. I have used anesrake successfully before, so there must be something I am missing, but I cannot see it! Any help greatly appreciated.
I want to convert a factor data frame to a factor matrix.
But when I try the code below, the type of the matrix is still character:
mydata <- data.frame(f1 = c("yes", "yes", "no", "no"),
                     f2 = c("yes", "no", "no", "yes"))
mydata[1:ncol(mydata)] <- lapply(mydata[1:ncol(mydata)], factor)
mymatrix <- as.matrix(mydata)
# this line didn't help (the matrix is still character)
mymatrix <- apply(mymatrix, FUN = factor, MARGIN = 2)
Maybe this will get you what you need?
mymatrix = matrix(mydata, ncol = 2)
str(mymatrix)
gives you
List of 2
$ : Factor w/ 2 levels "no","yes": 2 2 1 1
$ : Factor w/ 2 levels "no","yes": 2 1 1 2
- attr(*, "dim")= int [1:2] 1 2
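Note that this is a 1 x 2 list-matrix: each cell holds an entire factor column rather than a single value. A short illustration (assuming the mymatrix built above) of how to reach the factors inside:
mymatrix[[1]]     # the whole f1 column, still a factor
mymatrix[[1, 2]]  # the f2 column, extracted by position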
You would need to explain a bit more what you want to do to get more precise help.