Create Matrix of factors from data frame in R

I want to convert a data frame of factors to a factor matrix.
But when I try the code below, the resulting matrix is still character rather than factor.
mydata=data.frame(f1=c("yes","yes","no","no"),f2=c("yes","no","no","yes"))
mydata[1:ncol(mydata)]=lapply(mydata[1:ncol(mydata)],factor)
mymatrix=as.matrix(mydata)
# this line didn't help (the matrix is still character)
mymatrix=apply(mymatrix,FUN =factor,MARGIN = 2)

Maybe this will get you what you need?
mymatrix = matrix(mydata, ncol = 2)
str(mymatrix)
gives you
List of 2
$ : Factor w/ 2 levels "no","yes": 2 2 1 1
$ : Factor w/ 2 levels "no","yes": 2 1 1 2
- attr(*, "dim")= int [1:2] 1 2
You would need to explain a bit more what you want to do to get more precise help.
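For reference, a matrix can only hold a single atomic type, so as.matrix() on a data frame of factors will always fall back to a character matrix; the factor class cannot survive the conversion. If the underlying integer level codes are what you actually need, a minimal sketch (reusing the same mydata as above):
# as.matrix() coerces the factors to character, since a matrix holds one type
class(as.matrix(mydata)[, 1])
# [1] "character"
# data.matrix() keeps the internal integer codes instead (1 = "no", 2 = "yes")
data.matrix(mydata)
Whether that helps depends on what the factor matrix is ultimately for, which is why the answer asks for more detail.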

Related

Converting data frame into numeric sparseMatrix in R

I have a data.frame with 3 columns. The structure of the data.fame is as below
str(data)
'data.frame': 76971772 obs. of 3 variables:
$ V1: chr "XH104_AACGAGAGCTAAACTAGCCCTA" "XH104_AACGAGAGCTAAACTAGCCCTA" "XH104_AACGAGAGCTAAACTAGCCCTA" "XH104_AACGAGAGCTAAACTAGCCCTA" ...
$ V2: chr "10:100175000-100180000" "10:101065000-101070000" "10:101550000-101555000" "10:101585000-101590000" ...
$ V3: int 2 2 2 2 10 1 2 2 2 2 ...
I am trying to convert it into sparseMatrix such that the row name of sparseMatrix is data$V1 and the column name is data$V2. I am using the command given below to do that.
sparse.data <- with(data, sparseMatrix(i=as.numeric(V1), j=as.numeric(V2), x=V3, dimnames=list(levels(V1), levels(V2))))
I keep getting this error.
Error in sparseMatrix(i = as.numeric(V1), j = as.numeric(V2), x = V3, :
NA's in (i,j) are not allowed
I realized that when I use i=as.numeric(V1) in my command, all the values of V1 become NA.
Can someone suggest how I can solve this error?
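The NAs most likely come from calling as.numeric() on character vectors: V1 and V2 are chr, not factor, so as.numeric() cannot map them to level indices. A minimal sketch of one workaround, assuming the three-column layout shown above, is to convert the columns to factors first and use their integer codes as the (i, j) indices:
library(Matrix)
# Convert the character columns to factors so their integer codes can serve
# as row/column indices, with the levels providing the dimnames.
data$V1 <- factor(data$V1)
data$V2 <- factor(data$V2)
sparse.data <- with(data,
  sparseMatrix(i = as.integer(V1),
               j = as.integer(V2),
               x = V3,
               dimnames = list(levels(V1), levels(V2))))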

Numbers in quotes read as numerical variable in R

I have a dataset where there are many columns with numbers in quotes which indicates that a variable is a factor. (ex: "8").
read.table automatically converts them to numeric variables even if stringsAsFactors is set to TRUE.
Assuming I cannot convert them manually with as.factor, how can I import this dataset with those variables read directly as factors?
That's because of the quote option. Set quote="". Example:
t <- '"1" "3"
"2" "4"'
> str(read.table(text=t))
'data.frame': 2 obs. of 2 variables:
$ V1: int 1 2
$ V2: int 3 4
> str(read.table(text=t, quote=""))
'data.frame': 2 obs. of 2 variables:
$ V1: Factor w/ 2 levels "\"1\"","\"2\"": 1 2
$ V2: Factor w/ 2 levels "\"3\"","\"4\"": 1 2
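Note that with quote="" the literal quote characters stay embedded in the factor levels (as the str() output shows). If that's unwanted, one possible cleanup, sketched here on the same toy input, is to strip the quotes afterwards and re-factor:
d <- read.table(text = t, quote = "")
# gsub() drops the embedded quote characters; factor() restores the class
d[] <- lapply(d, function(x) factor(gsub('"', "", x)))
str(d)
# 'data.frame': 2 obs. of 2 variables:
#  $ V1: Factor w/ 2 levels "1","2": 1 2
#  $ V2: Factor w/ 2 levels "3","4": 1 2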

NA/NaN/Inf in foreign function error with mantelhaen.test()

I have a 100k-row data frame on which I want to compute a Cochran–Mantel–Haenszel test.
My variables are educational level and a computed score cut into quantiles, my grouping variable is sex, and the call looks like this:
mantelhaen.test(db$education, db$score.grouped, db$sex)
This code throws the following error and warning:
Error in qr.default(a, tol = tol) : NA/NaN/Inf in foreign function call (arg 1)
In addition: Warning message: In ntot * rowsums : NAs produced by integer overflow
The error seems to be caused by my first variable: of the 7 variables I tested, only 2 trigger the problem, and they don't seem to share anything obvious.
Missing values and factor levels don't seem to differ between the variables that throw the error and those that don't. I tried with complete cases (via na.omit) and the problem persists.
What triggers this error? What does it mean?
How can I get rid of it?
Interesting posts: R: NA/NaN/Inf in foreign function call (arg 1), What is integer overflow in R and how can it happen?
ADDENDUM: Here is the result of str (the failing variables are education and imc.cl):
str(db[c("education","score.grouped","sex", ...)])
'data.frame': 104382 obs. of 7 variables:
$ age.cl: Ord.factor w/ 5 levels "<30 ans"<"30-40 ans"<..: 5 2 1 1 3 4 2 3 4 4 ...
..- attr(*, "label")= chr "age"
$ emploi2 : Factor w/ 8 levels "Agriculteurs exploitants",..: 3 5 6 8 8 8 8 3 3 3 ...
..- attr(*, "label")= chr "CSP"
$ tabac : Factor w/ 4 levels "ancien fumeur",..: 4 1 4 4 3 4 4 1 4 4 ...
..- attr(*, "label")= chr "tabac"
$ situ_mari2 : Factor w/ 3 levels "Vit seul","Divorsé, séparé ou veuf",..: 3 2 1 1 1 3 1 3 2 3 ...
..- attr(*, "label")= chr "marriage"
$ education : Factor w/ 3 levels "Universitaire",..: 1 1 1 1 3 1 1 1 1 1 ...
$ revenu.cl : Factor w/ 4 levels "<1800 euros/uc",..: 3 4 2 NA 4 1 1 4 4 1 ...
$ imc.cl : Ord.factor w/ 6 levels "Maigre"<"Normal"<..: 2 2 1 2 3 1 3 2 2 3 ...
..- attr(*, "label")= chr "IMC"
EDIT: digging inside the function, the error and warning are caused by a call to qr.solve. I don't understand any of this, but I'll try to dive deeper.
EDIT2: inside qr.solve, the error is thrown by a Fortran call to .F_dqrdc2. This is so far beyond my level that my nose is starting to bleed.
EDIT3: I tried head and tail on my data to find out which row is at fault:
db2 = db %>% head(99787) #fails at 99788
db2 = db %>% tail(99698) #fails at 99699
mantelhaen.test(db2$education, db2$score.grouped, db2$sex)
This doesn't tell me much, but maybe it will mean something to you.
I was able to replicate the problem by making the data set bigger.
set.seed(101); n <- 500000
db <- data.frame(education = factor(sample(1:3, replace = TRUE, size = n)),
                 score     = factor(sample(1:5, replace = TRUE, size = n)),
                 sex       = sample(c("M", "F"), replace = TRUE, size = n))
After this, mantelhaen.test(db$education, db$score, db$sex) gives the reported error.
Thankfully, the real problem is not within the guts of the QR decomposition code: rather, it occurs when setting up a matrix prior to QR decomposition. There are two computations, ntot*colsums and ntot*rowsums, that overflow R's capacity for integer computation. There's a relatively easy way to work around this by creating a modified version of the function:
copy the source code: dump("mantelhaen.test", file="my_mh.R")
edit the source code:
line 1: change the function name to my_mantelhaen.test (to avoid confusion)
lines 199 and 200: change ntot to as.numeric(ntot), converting the integer to double precision before the overflow happens
source("my_mh.R") to read in the new function
Now
my_mantelhaen.test(db$education, db$score, db$sex)
should work.
You should definitely test the new function against the old function for cases where it works to make sure you get the same answer.
Now posted to the R bug list, we'll see what happens ...
update 11 May 2018: this is fixed in the development version of R (3.6 to be).
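For illustration, here is the overflow mechanism in isolation (the object names only mirror the ones mentioned above, not the actual internals of mantelhaen.test):
# R integers are 32-bit, so a product beyond .Machine$integer.max becomes NA
# with an "NAs produced by integer overflow" warning. Converting one operand
# to double first avoids this, which is what the as.numeric(ntot) edit does.
ntot    <- 500000L
rowsums <- c(100000L, 200000L, 200000L)
ntot * rowsums              # NA NA NA, plus the integer-overflow warning
as.numeric(ntot) * rowsums  # 5e+10 1e+11 1e+11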

R anesrake issue with list names non-binary argument

I am using anesrake to weight some survey data, but am getting a 'non-numeric argument to binary operator' error. The error only occurs after I have added names to the list used as targets:
gender1<-c(0.516166000986901,0.483833999013099)
age<-c(0.15828262425613,0.364861110549873,0.429947760183493,0.0469085050104993)
mylist<-list(gender1,age)
names(mylist)<-c("gender1","age")
result<-anesrake(mylist,france,caseid=france$caseid, iterate=TRUE)
Error in x + weights : non-numeric argument to binary operator
In addition: Warning message:
In anesrake(targets, france, caseid = france$caseid, iterate = TRUE) :
Targets for age do not sum to 100%. Adjusting values to total 100%
This also says that the targets for age don't add up to 100%, which they do, so I'm also not sure what that's about. If I leave out the names(mylist) bit, I get the following error instead, presumably because R doesn't know which variables to use, but not the non-numeric error:
Error in selecthighestpcts(discrep1, inputter, pctlim) :
No variables are off by more than 5 percent using the method you have chosen, either weighting is unnecessary or a smaller pre-raking limit should be chosen.
The variables in the data frame have the same names as the targets in the list, and are numeric:
> str(france)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 993 obs. of 5 variables:
$ Gender :Classes 'labelled', 'numeric' atomic [1:993] 2 2 2 2 2 2 2 2 2 2 ...
.. ..- attr(*, "label")= chr "Gender"
$ Age2 : num 2 3 2 2 2 2 2 1 2 3 ...
$ gender1: num 2 2 2 2 2 2 2 2 2 2 ...
$ caseid : int 1 2 3 4 5 6 7 8 9 10 ...
$ age : num 2 3 2 2 2 2 2 1 2 3 ...
I have also tried converting gender1 and age to factor variables (as the numbers represent levels of each variable - gender has 2, age has 4), but with the same result. I have used anesrake before successfully, so there must be something I am missing, but cannot see it! Any help greatly appreciated....
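One small observation on the 100% warning (this is only a guess and does not explain the non-numeric-argument error itself): the age proportions as typed sum to fractionally less than 1 in double precision, which may be all that anesrake is complaining about before it rescales them:
age <- c(0.15828262425613, 0.364861110549873,
         0.429947760183493, 0.0469085050104993)
sum(age)                # just below 1 due to floating-point rounding
sum(age) == 1           # FALSE
all.equal(sum(age), 1)  # TRUE, i.e. equal within tolerance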

Override [.data.frame to drop unused factor levels by default

The issue of dropping unused factor levels when subsetting has come up before. Common solutions include using character vectors where possible by declaring
options(stringsAsFactors = FALSE)
Sometimes, though, ordered factors are necessary for plotting, in which case we can use convenience functions like droplevels to create a wrapper for subset:
subsetDrop <- function(...){droplevels(subset(...))}
I realize that subsetDrop mostly solves this problem, but there are some situations where subsetting via [ is more convenient (and less typing!).
My question is how much further, for the sake of convenience, can we push this to be the 'default' behavior of R by overriding [ for data frames to automatically drop factor levels. For instance, the Hmisc package contains dropUnusedLevels which overrides [.factor for subsetting a single factor (which is no longer necessary, since the default [.factor appears to have a drop argument for dropping unused levels). I'm looking for a similar solution that would allow me to subset data frames using [ but automatically dropping unused factor levels (and of course preserving order in the case of ordered factors).
I'd be really wary of changing the default behavior; you never know when another function you use depends on the usual behavior. I'd instead write a function similar to your subsetDrop but for [, like
sel <- function(x, ...) droplevels(x[...])
Then
> d <- data.frame(a=factor(LETTERS[1:5]), b=factor(letters[1:5]))
> str(d[1:2,])
'data.frame': 2 obs. of 2 variables:
$ a: Factor w/ 5 levels "A","B","C","D",..: 1 2
$ b: Factor w/ 5 levels "a","b","c","d",..: 1 2
> str(sel(d,1:2,))
'data.frame': 2 obs. of 2 variables:
$ a: Factor w/ 2 levels "A","B": 1 2
$ b: Factor w/ 2 levels "a","b": 1 2
If you really want to change the default, you could do something like
foo <- `[.data.frame`
`[.data.frame` <- function(...) droplevels(foo(...))
but make sure you understand how namespaces work, as this override applies to anything called from the global environment while the version in the base namespace stays unchanged. That might be a good thing, but it's something you want to be sure you understand. After this change the output is as you'd like:
> str(d[1:2,])
'data.frame': 2 obs. of 2 variables:
$ a: Factor w/ 2 levels "A","B": 1 2
$ b: Factor w/ 2 levels "a","b": 1 2
You can do that by overriding the default value of the drop argument like this:
formals(`[.factor`)$drop <- TRUE
UPDATE
As for data.frame, you can do:
`[.data.frame` <- function(...)droplevels(base::`[.data.frame`(...))
This is essentially the same as @Aaron's approach.
If you want to undo this behavior, then:
rm(`[.data.frame`)
will do that.
> d <- data.frame(a=letters[1:10], b=LETTERS[1:10])
> str(d[1:5, ])
'data.frame': 5 obs. of 2 variables:
$ a: Factor w/ 10 levels "a","b","c","d",..: 1 2 3 4 5
$ b: Factor w/ 10 levels "A","B","C","D",..: 1 2 3 4 5
> `[.data.frame` <- function(...)droplevels(base::`[.data.frame`(...))
> str(d[1:5, ])
'data.frame': 5 obs. of 2 variables:
$ a: Factor w/ 5 levels "a","b","c","d",..: 1 2 3 4 5
$ b: Factor w/ 5 levels "A","B","C","D",..: 1 2 3 4 5
> rm(`[.data.frame`)
> str(d[1:5, ])
'data.frame': 5 obs. of 2 variables:
$ a: Factor w/ 10 levels "a","b","c","d",..: 1 2 3 4 5
$ b: Factor w/ 10 levels "A","B","C","D",..: 1 2 3 4 5
I think that changing the default is very dangerous, see my response here.
In most cases where people are concerned with dropping factor levels, you either really don't need to (summarizing something that you forced to have one value is silly) or there is a better way to accomplish what you are trying to do. The possible side effects of auto-dropping are potentially worse than the couple of keystrokes saved. Also, if you are doing reproducible research, you should not depend on, or even allow, the computer to change data without your specific say-so.
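For a concrete example of the kind of side effect meant here: automatic dropping silently removes zero-count cells from later tabulations, so a summary can change without any warning.
# A factor with a legitimately empty level: table() keeps the zero-count
# cell, but after droplevels() that cell silently disappears.
f <- factor(c("a", "b", "a"), levels = c("a", "b", "c"))
table(f)              # a: 2, b: 1, c: 0
table(droplevels(f))  # a: 2, b: 1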
