Regress each column in a data frame on a vector in R

I want to regress each column in a data set on a vector, then return the column which has the highest R-squared value. e.g. I have a vector HAPPY <- c(3,2,2,3,1,3,1,3) and I have a data set:
HEALTH CONINC MARITAL SATJOB1 MARITAL2 HAPPY
     3    441       5       1        2     3
     1   1764       5       1        2     2
     2   3087       5       1        2     2
     3   3087       5       1        2     3
     1   3969       2       1        5     1
     1   3969       5       1        2     3
     2   4852       5       1        2     2
     3   5734       3       1        3     3
Regress each of the columns in the data set on HAPPY, then return the column which has the highest R-squared. Example: for lm(HEALTH ~ HAPPY), if HEALTH had the highest R-squared value, then return HEALTH.
I've tried apply, but can't seem to figure out how to return the regression with the highest R-squared. Any suggestions?

I would break this up into two steps:
1) Determine the R-squared for each model
2) Determine which one is the highest
mydf <- data.frame(aa = rpois(8, 4), bb = rpois(8, 2), cc = rbinom(8, 1, .5),
                   happy = c(3, 2, 2, 3, 1, 3, 1, 3))
myRes <- sapply(mydf[-ncol(mydf)], function(x) {
  mylm <- lm(x ~ mydf$happy)
  theR2 <- summary(mylm)$r.squared
  return(theR2)
})
names(myRes[which(myRes == max(myRes))])
This was assuming that happy is in your data.frame.

This will do what you want, assuming your data.frame is called d and HAPPY is a separate vector:
# regress each column on HAPPY and record the R-squared
r2s <- apply(d, 2, function(x) summary(lm(x ~ HAPPY))$r.squared)
names(d)[which.max(r2s)]
You can find out how to extract components of the model, or in this case a summary of the model, with the str() command. It will give you a readout that helps you access the components of any complex object.
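For example, a quick sketch using the d and HAPPY objects assumed above:
m <- lm(d$HEALTH ~ HAPPY)
str(summary(m))       # readout listing components such as $r.squared and $coefficients
summary(m)$r.squared  # pull out just the R-squared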

Here's a solution using the colwise() function from the plyr package.
library(plyr)
df <- data.frame(a = runif(10), b = runif(10), c = runif(10), d = runif(10))
# R-squared from regressing column a on x
Rsq <- function(x) summary(lm(df$a ~ x))$r.squared
Rsqall <- colwise(Rsq)(df[, 2:4])
Rsqall
names(Rsqall)[which.max(Rsqall)]

Related

Splitting data and fitting distributions efficiently

For a project I have received a large amount of confidential patient-level data that I need to fit distributions to in order to use them in a simulation model. I am using R.
The problem is that I need to fit distributions to get the shape/rate data for at least 288 separate distributions (at least 48 subsets of 6 variables). The process will vary slightly between variables (depending on how each variable is distributed), but I want to be able to set up a function or loop for each variable and generate the shape and rate data for each subset I define.
An example of this: I need to find length-of-stay data for subsets of patients. There are 48 subsets of patients. The way I have currently been doing this is by manually filtering the data, extracting the result to a vector, and then fitting the distribution to the vector using fitdist.
i.e. For a variable that is gamma distributed:
vector1 <- los_data %>%
  filter(group == 1, setting == 1, diagnosis == 1) %>%
  pull(los)  # extract the length-of-stay column as a numeric vector (column name "los" assumed)
fitdist(vector1, "gamma")
I am quite new to data science and data processing, and I know there must be a simpler way to do this than by hand! I'm assuming something to do with a matrix, but I am absolutely clueless about how best to proceed.
One common practice is to split the data using split and then apply the function of interest to each group. Let's assume here we have four columns: group, setting, diagnosis and stay.length. The first three have two levels each.
df <- data.frame(
  group = sample(1:2, 64, TRUE),
  setting = sample(1:2, 64, TRUE),
  diagnosis = sample(1:2, 64, TRUE),
  stay.length = sample(1:5, 64, TRUE)
)
> head(df)
  group setting diagnosis stay.length
1     1       1         1           4
2     1       1         2           5
3     1       1         2           4
4     2       1         2           3
5     1       2         2           3
6     1       1         2           5
Perform the split and you will get a list of groups:
dfl <- split(df$stay.length, list(df$group, df$setting, df$diagnosis))
> head(dfl)
$`1.1.1`
[1] 5 3 4 1 4 5 4 2 1
$`2.1.1`
[1] 5 4 5 4 3 1 5 3 1
$`1.2.1`
[1] 4 2 5 4 5 3 5 3
$`2.2.1`
[1] 2 1 4 3 5 4 4
$`1.1.2`
[1] 5 4 4 4 3 2 4 4 5 1 5 5
$`2.1.2`
[1] 5 4 4 5 3 2 4 5 1 2
Afterwards, we can use lapply to apply any function to each group in the list. For example, mean:
dflm <- lapply(dfl, mean)
> dflm
$`1.1.1`
[1] 3.222222
.
.
.
.
$`2.2.2`
[1] 2.8
In your case, you can apply fitdist or any other function in the same way:
dfl.fitdist <- lapply(dfl, function(x) fitdist(x, "gamma"))
> dfl.fitdist
$`1.1.1`
Fitting of the distribution ' gamma ' by maximum likelihood
Parameters:
      estimate Std. Error
shape  3.38170  2.2831073
rate   1.04056  0.7573495
.
.
.
$`2.2.2`
Fitting of the distribution ' gamma ' by maximum likelihood
Parameters:
      estimate Std. Error
shape 4.868843  2.5184018
rate  1.549188  0.8441106
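If what you ultimately need is a table of shape/rate parameters per subset, here is a sketch that relies on fitdist storing its fitted parameters in the $estimate slot:
# Bind the estimated parameters of every group into one matrix (one row per group)
params <- do.call(rbind, lapply(dfl.fitdist, function(f) f$estimate))
params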
OK, your example isn't quite reproducible here, but I think the answer you want will be something like the following:
result <- los_data %>%
  group_by(group, setting, diagnosis) %>%
  do({
    fit <- fitdist(.$my_column, "gamma")
    data_frame(group = .$group[1], setting = .$setting[1],
               diagnosis = .$diagnosis[1], fit = list(fit))
  }) %>%
  ungroup()
This will give you a data frame of all the fits, with columns for group, setting, diagnosis as well as a list-column which contains the fits for each one. Since it is a list column, you will need to use double brackets to extract individual fits. Example:
# Get the fit in the first row
result$fit[[1]]
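If you prefer the parameters as plain columns instead of a list-column, a sketch along the same lines (again assuming fitdist's $estimate slot):
result$shape <- sapply(result$fit, function(f) f$estimate[["shape"]])
result$rate  <- sapply(result$fit, function(f) f$estimate[["rate"]])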

Loop ordinal regression statistical analysis and save the data in R

Could you please help me with a loop? I am relatively new to R.
The short version of the data looks like this:
sNumber blockNo running TrialNo    wordTar   wordTar1   Freq Len code code2
      1       1       1       5     spouse    violent   5011   6    1     2
      1       1       1       5    violent     spouse  17873   7    2     1
      1       1       1       5     spouse    aviator   5011   6    1     1
      1       1       1       5    aviator       wife    515   7    1     1
      1       1       1       5       wife    aviator  87205   4    1     1
      1       1       1       5    aviator     spouse    515   7    1     1
      1       1       1       9  stability    usually  12642   9    1     3
      1       1       1       9    usually   requires  60074   7    3     4
      1       1       1       9   requires     client  25949   8    4     1
      1       1       1       9     client   requires  16964   6    1     4
      2       2       1       5      grimy      cloth    757   5    2     1
      2       2       1       5      cloth       eats   8693   5    1     4
      2       2       1       5       eats    whitens   3494   4    4     4
      2       2       1       5    whitens      woman     18   7    4     1
      2       2       1       5      woman    penguin 162541   5    1     1
      2       2       1       9        pie   customer   8909   3    1     1
      2       2       1       9   customer  sometimes  13399   8    1     3
      2       2       1       9  sometimes reimburses  96341   9    3     4
      2       2       1       9 reimburses  sometimes     65  10    4     3
      2       2       1       9  sometimes   gangster  96341   9    3     1
I have code for an ordinal regression analysis for one trial from one participant (eye-tracking data, eyeData) that looks like this:
#------------set the path and import the library-----------------
setwd("/AscTask-3/Data")
library(ordinal)
#-------------read the data----------------
read.delim(file.choose(), header=TRUE) -> eyeData
#-------------extract 1 trial from one participant---------------
ss <- subset(eyeData, sNumber == 6 & runningTrialNo == 21)
#-------------delete duplicates = refixations-----------------
ss.s <- ss[!duplicated(ss$wordTar), ]
#-------------change the raw frequencies to log freq--------------
ss.s$lFreq <- log(ss.s$Freq)
#-------------add a new column with sequential numbers as a factor ------------------
ss.s$rankF <- as.factor(seq(nrow(ss.s)))
#------------ estimate an ordered logistic regression model - fit ordered logit model----------
m <- clm(rankF~lFreq*Len, data=ss.s, link='probit')
summary(m)
#---------------get confidence intervals (CI)------------------
(ci <- confint(m))
#----------odds ratios (OR)--------------
exp(coef(m))
The eyeData file is a huge mass of data consisting of 91832 observations of 11 variables. In total there are 41 participants with 78 trials each. In my code I extract data from one trial from one participant to run the analysis. However, it takes a long time to run the analysis manually for all trials for all participants. Could you please help me to create a loop that will read in all 78 trials from all 41 participants and save the output of the statistics (I want to save summary(m), ci, and coef(m)) in one file.
Thank you in advance!
You could generate a unique identifier for every trial of every participant. Then you could loop over all unique values of this identifier and subset the data accordingly. Then you run the regressions and save the output as an R object:
eyeData$uniqueIdent <- paste(eyeData$sNumber, eyeData$runningTrialNo, sep = "-")
uniqueID <- unique(eyeData$uniqueIdent)
for (un in uniqueID) {
  ss <- eyeData[eyeData$uniqueIdent == un, ]
  ss <- ss[!duplicated(ss$wordTar), ]  # maybe do this outside the loop
  ss$lFreq <- log(ss$Freq)  # you could do this outside the loop too
  # create DV
  ss$rankF <- as.factor(seq(nrow(ss)))
  m <- clm(rankF ~ lFreq * Len, data = ss, link = "probit")
  seeSumm <- summary(m)
  ci <- confint(m)
  oddsR <- exp(coef(m))
  # add -un- to the output file name to be able to identify where it came from
  save(seeSumm, ci, oddsR, file = paste("toSave_", un, ".Rdata", sep = ""))
}
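To inspect a single trial later, load its file back in; the three objects reappear under the names they were saved with (the file name here is just an example built from one identifier):
load("toSave_1-5.Rdata")  # e.g. participant 1, trial 5
seeSumm
ci
oddsR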
A variation on this is to combine the output of every iteration in a list: create an empty list "gatherRes" before the loop, then after running the estimation and post-estimation commands fill in the corresponding element:
gatherRes <- vector(mode = "list", length = length(unique(eyeData$uniqueIdent)))  # before the loop
gatherRes[[un]] <- list(seeSumm, ci, oddsR)  # last line inside the loop
If you're concerned with speed, you could consider writing a function that does all this and use lapply (or mclapply).
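For instance, a minimal sketch of that approach, reusing eyeData, uniqueID and the ordinal package loaded above (mc.cores is machine-dependent, and on Windows plain lapply is the safe choice):
library(parallel)
runOne <- function(un) {
  ss <- eyeData[eyeData$uniqueIdent == un, ]
  ss <- ss[!duplicated(ss$wordTar), ]
  ss$lFreq <- log(ss$Freq)
  ss$rankF <- as.factor(seq(nrow(ss)))
  m <- clm(rankF ~ lFreq * Len, data = ss, link = "probit")
  list(summary = summary(m), ci = confint(m), oddsR = exp(coef(m)))
}
gatherRes <- mclapply(uniqueID, runOne, mc.cores = 2)  # or simply lapply(uniqueID, runOne)
names(gatherRes) <- uniqueID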
Here is a solution using the plyr package (it should be faster than a for loop).
Since you don't provide a reproducible example, I'll use the iris data as an example.
First make a function to calculate your statistics of interest and return them as a list. For example:
# Function to return summary, confidence intervals and coefficients from lm
lm_stats <- function(x) {
  m <- lm(Sepal.Width ~ Sepal.Length, data = x)
  return(list(summary = summary(m), confint = confint(m), coef = coef(m)))
}
Then use the dlply function, with your variables of interest as the grouping variables:
data(iris)
library(plyr)  # if not installed, do install.packages("plyr")
# Using "Species" as the grouping variable
results <- dlply(iris, c("Species"), lm_stats)
This will return a list of lists, containing output of summary, confint and coef for each species.
For your specific case, the function could look like (not tested):
ordFit_stats <- function(x) {
  # Remove duplicates
  x <- x[!duplicated(x$wordTar), ]
  # Make log frequencies
  x$lFreq <- log(x$Freq)
  # Make ranks
  x$rankF <- as.factor(seq(nrow(x)))
  # Fit model
  m <- clm(rankF ~ lFreq * Len, data = x, link = "probit")
  # Return list of statistics
  return(list(summary = summary(m), confint = confint(m), coef = coef(m)))
}
And then:
results <- dlply(eyeData, c("sNumber", "runningTrialNo"), ordFit_stats)
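dlply names the list elements by joining the grouping values with a dot, so individual results can then be pulled out like this (the exact names will depend on your data):
# Statistics for participant 1, trial 5
results[["1.5"]]$summary
results[["1.5"]]$confint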

How to find the final value from repeated measures in R?

I have data arranged like this in R:
indv time mass
   1   10    7
   2    5    3
   1    5    1
   2    4    4
   2   14   14
   1   15   15
where indv is individual in a population. I want to add columns for initial mass (mass_i) and final mass (mass_f). I learned yesterday that I can add a column for initial mass using ddply in plyr:
sorted <- ddply(test, .(indv, time), sort)
sorted2 <- ddply(sorted, .(indv), transform, mass_i = mass[1])
which gives a table like:
  indv mass time mass_i
1    1    1    5      1
2    1    7   10      1
3    1   10   15      1
4    2    4    4      4
5    2    3    5      4
6    2    8   14      4
7    2    9   20      4
However, this same method will not work for finding the final mass (mass_f), as I have a different number of observations for each individual. Can anyone suggest a method for finding the final mass, when the number of observations may vary?
You can simply use length(mass) as the index of the last element:
sorted2 <- ddply(sorted, .(indv), transform,
                 mass_i = mass[1], mass_f = mass[length(mass)])
As suggested by mb3041023 and discussed in the comments below, you can achieve similar results without sorting your data frame:
ddply(test, .(indv), transform,
      mass_i = mass[which.min(time)], mass_f = mass[which.max(time)])
Except for the order of rows, this is the same as sorted2.
You can use head(mass, 1) and tail(mass, 1) in place of mass[1] and mass[length(mass)]:
sorted2 <- ddply(sorted, .(indv), transform, mass_i = head(mass, 1), mass_f=tail(mass, 1))
Once you have this table, it's pretty simple:
t <- tapply(test$mass, test$indv, max)
This will give you an array with indv as the names and the maximum mass as the values (which equals mass_f if mass only increases over time).
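If the largest mass isn't guaranteed to be the final one, here is a sketch that takes the value at the latest time instead:
o <- order(test$time)
mass_f <- tapply(test$mass[o], test$indv[o], function(m) m[length(m)])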

Pull coefficients from a data frame based on information in another data frame

Right now I have two data frames in R. One contains some data that looks like this:
> data
  p a         i
1 1 1 2.2561469
2 5 2 0.2316390
3 2 3 0.4867456
4 3 1 0.1511705
5 4 2 0.8838884
And one that contains coefficients and looks like this:
> coef
         3        2        1
1 29420.50 31029.75 29941.96
2 26915.00 27881.00 27050.00
3 27756.00 28904.00 28699.40
4 28345.33 29802.33 28377.56
5 28217.00 29409.00 28738.67
These data frames are connected as each value in data$a corresponds to a column name in coef and data$p corresponds to row names in coef.
I need to multiply these coefficients by the values in data$i, matching the row and column names in coef to data$p and data$a.
In other words, for each row in data, I need to use that row's data$p and data$a to pull a specific number from coef, multiply it by the value of data$i for that row, and store the result in a new column in data that looks something like this:
> data
  p a         i     z
1 1 1 2.2561469 67553
2 5 2 0.2316390  6812
3 2 3 0.4867456     .
4 3 1 0.1511705     .
5 4 2 0.8838884     .
I was thinking I should create factors in my coef data frame based on the row and column names but am unsure of where to go from there.
Thanks in advance,
Ian
If you order your coef data.frame, you can just index them as though the column names weren't there.
coef <- coef[,order(names(coef))]
Then apply a function to each row:
myfun <- function(x) {
  # x is one row of data: x[1] = p (coef row), x[2] = a (coef column), x[3] = i
  x[3] * coef[x[1], x[2]]
}
data$z <- apply(data, 1, myfun)
> data
  p a         i         z
1 1 1 2.2561469 67553.460
2 5 2 0.2316390  6812.271
3 2 3 0.4867456 13100.758
4 3 1 0.1511705  4338.503
5 4 2 0.8838884 26341.934
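For larger data frames, a vectorized sketch that avoids apply: after the reordering above, the value of a is also the column position in coef, so the coefficient matrix can be indexed by (row, column) pairs directly:
cm <- as.matrix(coef)                         # rows 1..5 in order, columns "1", "2", "3"
data$z <- data$i * cm[cbind(data$p, data$a)]  # element-wise (p, a) lookup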

Data frame "expand" procedure in R?

This is not a real statistical question, but rather a data preparation question before performing the actual statistical analysis. I have a data frame which consists of sparse data. I would like to "expand" this data to include zeroes for missing values, group by group.
Here is an example of the data (a and b are two factors defining the group, t is the sparse timestamp and x is the value):
test <- data.frame(
  a = c(1,1,1,1,1,1,1,1,1,1,1),
  b = c(1,1,1,1,1,2,2,2,2,2,2),
  t = c(0,2,3,4,7,3,4,6,7,8,9),
  x = c(1,2,1,2,2,1,1,2,1,1,3))
Assuming I would like to expand the values between t=0 and t=9, this is the result I'm hoping for:
test.expanded <- data.frame(
  a = c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1),
  b = c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2),
  t = c(0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9),
  x = c(1,0,2,1,2,0,0,2,0,0,0,0,0,1,1,0,2,1,1,3))
Zeroes have been inserted for all missing values of t. This makes it easier to use.
I have a quick and dirty implementation which sorts the dataframe and loops through each of its lines, adding missing lines one at a time. But I'm not entirely satisfied by the solution. Is there a better way to do it?
For those who are familiar with SAS, it is similar to PROC EXPAND.
Thanks!
As you noted in a comment to the other answer, doing it by group is easy with plyr, which just leaves how to "fill in" the data sets. My approach is to use merge.
library("plyr")
test.expanded <- ddply(test, c("a","b"), function(DF) {
  DF <- merge(data.frame(t = 0:9), DF[, c("t","x")], all.x = TRUE)
  DF[is.na(DF$x), "x"] <- 0
  DF
})
merge with all.x=TRUE will make the missing values NA, so the second line of the function is needed to replace those NAs with 0's.
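A base-R sketch of the same idea without plyr, assuming every (a, b) combination should span t = 0:9 (note that expand.grid also creates combinations absent from the data):
full <- merge(expand.grid(t = 0:9, b = unique(test$b), a = unique(test$a)),
              test, all.x = TRUE)
full$x[is.na(full$x)] <- 0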
This is convoluted but works fine:
test <- data.frame(
  a = c(1,1,1,1,1,1,1,1,1,1,1),
  b = c(1,1,1,1,1,2,2,2,2,2,2),
  t = c(0,2,3,4,7,3,4,6,7,8,9),
  x = c(1,2,1,2,2,1,1,2,1,1,3))
my.seq <- seq(0, 9)
not.t <- !(my.seq %in% test$t)
test[nrow(test) + seq(length(my.seq[not.t])), "t"] <- my.seq[not.t]
test
#------------
    a  b t  x
1   1  1 0  1
2   1  1 2  2
3   1  1 3  1
4   1  1 4  2
5   1  1 7  2
6   1  2 3  1
7   1  2 4  1
8   1  2 6  2
9   1  2 7  1
10  1  2 8  1
11  1  2 9  3
12 NA NA 1 NA
13 NA NA 5 NA
Not sure if you want it sorted by t afterwards or not. If so, easy enough to do:
https://stackoverflow.com/a/6871968/636656
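For completeness, the sort itself is a one-liner:
test[order(test$t), ]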
