I am trying to figure out a way to do the following in R, ideally with something from the apply() family of functions (i.e. not with a for loop).
I want to apply a function that uses four existing columns of my data frame and save the results of that function in three new columns of the same data frame.
For example if I have (with test data):
x <- c("var1","var2","var3","var4")
A_x <- c(5,4,3,2)
A_notx <- c(5,6,7,8)
B_x <- c(10,10,5,15)
B_notx <- c(10,10,15,5)
example <- data.frame(A_x,A_notx,B_x,B_notx)
rownames(example) <- x
A_x A_notx B_x B_notx
var1 5 5 10 10
var2 4 6 10 10
var3 3 7 5 15
var4 2 8 15 5
And I want to use oddsratio() from the epitools package on these counts. How could I save the odds ratio as well as the upper and lower bounds as 3 new columns? I would like example$odds, example$upper, and example$lower to exist in my data frame.
I have messed around a bit with apply() and by() but can't seem to figure it out. With apply() each row is converted from a data frame to a matrix, and setting column values of the outer data frame from inside the applied function is out of its scope. Perhaps this whole thing is better served by a list object than a data frame? In the end I want to have all the information (counts, statistics, etc.) on hand for a given variable name, with one variable per column.
Perhaps this is what you're looking for?
library(epitools)

example <- cbind(example,
                 t(apply(example, 1, function(x) {
                   oddsratio(as.table(rbind(x[1:2], x[3:4])))$measure[2, ]
                 })))
example
A_x A_notx B_x B_notx estimate lower upper
var1 5 5 10 10 0.99999998 0.20537812 4.8690679
var2 4 6 10 10 0.68116864 0.13043731 3.2586139
var3 3 7 5 15 1.28836297 0.20019246 7.2563905
var4 2 8 15 5 0.09603445 0.01039156 0.5446693
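If you want the exact column names asked for in the question (example$odds, example$upper, example$lower), only the estimate column needs renaming, since lower and upper already match:

names(example)[names(example) == "estimate"] <- "odds"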
I have a data frame and I want to take the values of just one row, across all columns, as a numeric vector. One way of doing that would be df_transposed = t(df), and then I could take the wanted column with df_transposed$column.
I feel there is a better way of doing it, without creating a new data frame and using more memory. I tried something like t(df)$column but that obviously won't work.
How can this be done?
Try this
as.numeric(df['rowname',])
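For instance, with a small data frame whose row names default to "1" through "4", you can pull row 3 by name:

df <- data.frame(matrix(1:12, 4, 3))
as.numeric(df["3", ])
# [1]  3  7 11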
Data frames are special types of lists, consisting of vectors of equal length. So we can treat a data frame as a list and extract the nth element of each vector, where n is the row number you want. Example:
df
# X1 X2 X3
# 1 1 5 9
# 2 2 6 10
# 3 3 7 11
# 4 4 8 12
sapply(df, `[`, 3)
# X1 X2 X3
# 3 7 11
You can wrap unname(.) around it to drop the element names, but that probably creates another copy in memory and is really just cosmetic.
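For example, using the df defined under Data below:

unname(sapply(df, `[`, 3))
# [1]  3  7 11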
Data:
df <- data.frame(matrix(1:12, 4, 3))
The ms.compute function in the RxnSim package requires me to specify two variables to compare. I am not sure how to define the variables (i.e. ms.compute(x, y)) so that the function uses whatever is in the two columns of the current row, repeats this for each row, and ideally outputs the result in a third column.
df
X1 X2
1 1 6
2 2 7
3 3 8
4 4 9
5 5 10
For this df, I would want a third column X3 that holds the result of a function like FUN(x, y) = x + y, in which x, y are 1, 6 for row 1 etc., repeated for each row, so row 1 = 7, row 2 = 9, etc.
Using dplyr might be easier:
library(dplyr)

df <- df %>%
  mutate(X3 = X1 + X2)
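If the actual call you need is ms.compute() rather than a simple sum, base R's mapply() does the same row-by-row pairing; here mol1 and mol2 are hypothetical column names standing in for whatever two inputs ms.compute() expects, and the result lands in a new column:

library(RxnSim)

# apply ms.compute() to each row's pair of values (mol1/mol2 are made-up
# column names used only for illustration)
df$X3 <- mapply(function(a, b) ms.compute(a, b), df$mol1, df$mol2)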
Let this be my data set:
df <- data.frame(x1 = runif(1000), x2 = runif(1000), x3 = runif(1000),
split = sample( c('SPLITMEHERE', 'OBS'), 1000, replace=TRUE, prob=c(0.04, 0.96) ))
So, I have some variables (in my case, 15), and criteria by which I want to split the data.frame into multiple data.frames.
My criterion is the following: every other time 'SPLITMEHERE' appears, I want to take all the values, i.e. all the 'OBS' rows below it, and make a data.frame from just those observations. So if there are 20 'SPLITMEHERE's in the starting data.frame, I want to end up with 10 data.frames.
I know it sounds confusing and like it doesn't make much sense, but this is the result of extracting the raw numbers from an awfully dirty .txt file. Basically, every 'SPLITMEHERE' denotes a new table in that .txt file, but each county is divided into two tables, so I want one table (data.frame) per county.
To make it clearer, here is an example of exactly what I need. Let's say the first 20 observations are:
x1 x2 x3 split
1 0.307379064 0.400526799 0.2898194543 SPLITMEHERE
2 0.465236674 0.915204924 0.5168274657 OBS
3 0.063814420 0.110380201 0.9564822116 OBS
4 0.401881416 0.581895095 0.9443995396 OBS
5 0.495227871 0.054014926 0.9059893533 SPLITMEHERE
6 0.091463620 0.945452614 0.9677482590 OBS
7 0.876123151 0.702328031 0.9739113525 OBS
8 0.413120761 0.441159673 0.4725571219 OBS
9 0.117764512 0.390644966 0.3511555807 OBS
10 0.576699384 0.416279417 0.8961428872 OBS
11 0.854786077 0.164332814 0.1609375612 OBS
12 0.336853841 0.794020157 0.0647337821 SPLITMEHERE
13 0.122690541 0.700047133 0.9701538396 OBS
14 0.733926139 0.785366852 0.8938749305 OBS
15 0.520766503 0.616765349 0.5136788010 OBS
16 0.628549288 0.027319848 0.4509875809 OBS
17 0.944188977 0.913900539 0.3767973795 OBS
18 0.723421337 0.446724318 0.0925365961 OBS
19 0.758001243 0.530991725 0.3916394396 SPLITMEHERE
20 0.888036748 0.862066601 0.6501050976 OBS
What I would like to get is this:
data.frame1:
1 0.465236674 0.915204924 0.5168274657 OBS
2 0.063814420 0.110380201 0.9564822116 OBS
3 0.401881416 0.581895095 0.9443995396 OBS
4 0.091463620 0.945452614 0.9677482590 OBS
5 0.876123151 0.702328031 0.9739113525 OBS
6 0.413120761 0.441159673 0.4725571219 OBS
7 0.117764512 0.390644966 0.3511555807 OBS
8 0.576699384 0.416279417 0.8961428872 OBS
9 0.854786077 0.164332814 0.1609375612 OBS
And
data.frame2:
1 0.122690541 0.700047133 0.9701538396 OBS
2 0.733926139 0.785366852 0.8938749305 OBS
3 0.520766503 0.616765349 0.5136788010 OBS
4 0.628549288 0.027319848 0.4509875809 OBS
5 0.944188977 0.913900539 0.3767973795 OBS
6 0.723421337 0.446724318 0.0925365961 OBS
7 0.888036748 0.862066601 0.6501050976 OBS
Therefore, the split column only shows me where to split; the data in the rows where 'SPLITMEHERE' appears are meaningless. But that is no bother, as I can delete those rows later; the point is separating the data into multiple data.frames based on this criterion.
Obviously, just the split() function and filter() from dplyr won't suffice here. The real problem is that the lines which are supposed to separate the data.frames (i.e. every other 'SPLITMEHERE') do not appear at regular intervals, as shown in my example above: sometimes there is a gap of 3 lines, other times it could be 10 or 15 lines.
Is there any way to do this efficiently in R?
The hardest part of the problem is creating the groups. Once we have the proper groupings, it's easy enough to use a split to get your result.
With that said, you can use a cumsum to create the groups. Here I divide the cumsum by 2 and take the ceiling so that each pair of SPLITMEHEREs collapses into a single group. I also use an ifelse to exclude the rows containing SPLITMEHERE:
df$group <- ifelse(df$split != "SPLITMEHERE", ceiling(cumsum(df$split=="SPLITMEHERE")/2), 0)
res <- split(df, df$group)
The result is a list with one data frame per group. The group named 0 contains the rows you want to throw out.
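Since the SPLITMEHERE rows (and any rows before the first split) all land in group 0, dropping that element of the list leaves one clean data frame per county:

res <- res[names(res) != "0"]
res[[1]]   # corresponds to data.frame1 in the question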
Friends, I am new to R and I am stuck on a problem. I have a column in a data frame listing math functions (mean, min, max, etc.), and I have another data frame with the same number of rows as there are functions; I want to apply each of those functions to the corresponding row of that data frame.
Below is the df with the math functions:
var1 funct
1 A min
2 B max
3 C mean
4 D min
Below is the df to which these functions need to be applied (row-wise):
a1 b1 c1 d1
1 4 8 12 15
2 NA 9 13 16
3 6 10 NA 17
4 7 11 15 18
So the 1st function needs to be applied to the first row, and so on. Can anyone help with this? I have tried do.call and eval(parse()), but failed. Note there are NAs; I still want a result for each row (i.e. exclude the NA values, but don't delete the row itself).
Regards,
Try this:
df2 <- read.csv(textConnection("a1,b1,c1,d1
4,8,12,15
NA,9,13,16
6,10,NA,17
7,11,15,18"))
df1 <- read.csv(textConnection("var1,funct
A,min
B,max
C,mean
D,min"))
df1$funct <- as.character(df1$funct)

x <- mapply(do.call,
            as.list(df1$funct),
            lapply(split(df2, seq(nrow(df2))),
                   function(x) list(na.omit(unlist(x)))))
(The lapply(split(...), function(x) list(...)) wrapper is there because do.call requires that its second argument be a list.)
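To see why, a quick illustration of do.call() taking its arguments as a list:

do.call("min", list(c(4, 8, 12, 15)))
# [1] 4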
Calling your first data frame (the one with the functions) df1 and the data frame with the data in rows df2, then
mapply(function(f,x) get(f)(x,na.rm=TRUE), df1$funct, as.data.frame(t(df2)))
will produce
min max mean min
4 16 11 7
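Either way, the per-row results can be stored alongside the functions, for example as a new column of df1:

df1$result <- mapply(function(f, x) get(f)(x, na.rm = TRUE),
                     df1$funct, as.data.frame(t(df2)))
df1
#   var1 funct result
# 1    A   min      4
# 2    B   max     16
# 3    C  mean     11
# 4    D   min      7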
I have a .csv file with several columns, but I am only interested in two of them (TIME and USER). The USER column consists of the value markers 1 or 2 in chunks, and the TIME column consists of a value in seconds. I want to calculate the difference between the TIME value of the first 2 in a chunk of the USER column and the TIME value of the first 1 in a chunk of the USER column. I want to accomplish this in R. It would be ideal for there to be another column added to my data file with these differences.
So far I have only imported the .csv into R.
Latency <- read.csv("/Users/alinazjoo/Documents/Latency_allgaze.csv")
I'm going to guess your data looks like this
# sample data
set.seed(15)
rr<-sample(1:4, 10, replace=T)
dd<-data.frame(
user=rep(1:5, each=10),
marker=rep(rep(1:2,10), c(rbind(rr, 5-rr))),
time=1:50
)
Then you can calculate the difference using the base functions aggregate and transform. Observe:
namin <- function(...) min(..., na.rm = TRUE)
dx <- transform(
  aggregate(
    cbind(m2 = ifelse(marker == 2, time, NA),
          m1 = ifelse(marker == 1, time, NA)) ~ user,
    dd, namin, na.action = na.pass),
  diff = m2 - m1)
dx
# user m2 m1 diff
# 1 1 4 1 3
# 2 2 15 11 4
# 3 3 23 21 2
# 4 4 35 31 4
# 5 5 44 41 3
We use aggregate to find the minimal time for each of the two kinds of markers, then we use transform to calculate the difference between them.
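And if, as the question mentions, you would like the differences attached as another column of the original data, merging dx back in by user is one way (using the sample dd from above):

dd2 <- merge(dd, dx[, c("user", "diff")], by = "user")
head(dd2)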