R data.table: Dynamically Update a Different Column for each Row

I'm working on some code where I need to find the maximum value over a set of columns and then update that maximum value. Consider this toy example:
test <- data.table(thing1 = c('AAA','BBB','CCC','DDD','EEE'),
                   A = c(9,5,4,2,5),
                   B = c(2,7,2,6,3),
                   C = c(6,2,5,4,1),
                   ttl = c(1,1,3,2,1))
where the resulting data.table looks like this:
thing1  A  B  C  ttl
AAA     9  2  6  1
BBB     5  7  2  1
CCC     4  2  5  3
DDD     2  6  4  2
EEE     5  3  1  1
The goal is to find the column (A, B, or C) with the maximum value and replace that value by the current value minus 0.1 times the value in the ttl column (i.e. new_value=old_value - 0.1*ttl). The other columns (not containing the maximum value) should remain the same. The resulting DT should look like this:
thing1    A    B    C  ttl
AAA     8.9    2    6    1
BBB       5  6.9    2    1
CCC       4    2  4.7    3
DDD       2  5.8    4    2
EEE     4.9    3    1    1
The "obvious" way of doing this is to write a for loop and loop through each row of the DT. That's easy enough to do and is what the code I'm adapting this from did. However, the real DT is much larger than my toy example and the for loop takes some time to run, which is why I'm trying to adapt the code to take advantage of vectorization and get rid of the loop.
Here's what I have so far:
test[,max_position:=names(.SD)[apply(.SD,1,function(x) which.max(x))],.SDcols=(2:4)]
test[,newmax:=get(max_position)-ttl*.1,by=1:nrow(test)]
which produces this DT:
thing1  A  B  C  ttl  max_position  newmax
AAA     9  2  6  1    A             8.9
BBB     5  7  2  1    B             6.9
CCC     4  2  5  3    C             4.7
DDD     2  6  4  2    B             5.8
EEE     5  3  1  1    A             4.9
The problem comes in assigning the value of the newmax column back to the column it came from. I naively tried this (along with a few other things), which fails with "'max_position' not found":
test[,(max_position):=newmax,by=1:nrow(test)]
It's straightforward to solve the problem by reshaping the DT, which is the solution I have in place for now (see below), but I worry that with my full DT two reshapes will be slow as well (though presumably better than the for loop). Any suggestions on how to make this work as intended?
Reshaping solution (using gather/spread from the tidyr package), for reference:
test[,max_position:=names(.SD)[apply(.SD,1,function(x) which.max(x))],.SDcols=(2:4)]
test[,newmax:=get(max_position)-ttl*.1,by=1:nrow(test)]
test <- setDT(gather(test,idgroup,val,c(A,B,C)))
test[,maxval:=max(val),by='thing1']
test[val==maxval,val:=newmax][,maxval:=NULL]
test <- setDT(spread(test,idgroup,val))

With the OP's code, replace can do the update in place, row by row:
test[, (2:4) := replace(.SD, which.max(.SD), max(.SD, na.rm = TRUE) - 0.1 * ttl),
by = 1:nrow(test),.SDcols = 2:4]
Output:
> test
thing1 A B C ttl
1: AAA 8.9 2.0 6.0 1
2: BBB 5.0 6.9 2.0 1
3: CCC 4.0 2.0 4.7 3
4: DDD 2.0 5.8 4.0 2
5: EEE 4.9 3.0 1.0 1
In base R, this may be faster with row/column matrix indexing:
test1 <- as.data.frame(test)
m1 <- cbind(seq_len(nrow(test1)), max.col(test1[2:4], "first"))
test1[2:4][m1] <- test1[2:4][m1] - 0.1 * test1$ttl
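For completeness, here is the matrix-indexing idea run end to end on the toy data, recreated inline as a plain data.frame so the snippet stands alone:

```r
# Recreate the toy data as a plain data.frame (base R only)
test1 <- data.frame(thing1 = c("AAA", "BBB", "CCC", "DDD", "EEE"),
                    A = c(9, 5, 4, 2, 5),
                    B = c(2, 7, 2, 6, 3),
                    C = c(6, 2, 5, 4, 1),
                    ttl = c(1, 1, 3, 2, 1))

# Two-column (row, column) matrix giving the position of each row's maximum
m1 <- cbind(seq_len(nrow(test1)), max.col(test1[2:4], "first"))

# Subtract 0.1 * ttl only at those positions; the other cells are untouched
test1[2:4][m1] <- test1[2:4][m1] - 0.1 * test1$ttl
test1
```

Because the (row, column) pairs touch exactly one cell per row, no grouping or per-row loop is needed.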

Related

Remove groups based on multiple conditions in dplyr R

I have a data that looks like this
gene=c("A","A","A","A","B","B","B","B")
frequency=c(1,1,0.8,0.6,0.3,0.2,1,1)
time=c(1,2,3,4,1,2,3,4)
df <- data.frame(gene,frequency,time)
gene frequency time
1 A 1.0 1
2 A 1.0 2
3 A 0.8 3
4 A 0.6 4
5 B 0.3 1
6 B 0.2 2
7 B 1.0 3
8 B 1.0 4
I want to remove a whole gene group, in this case A or B, when it has frequency > 0.9 at time == 1. Here that means removing A, so my data should look like this:
gene frequency time
1 B 0.3 1
2 B 0.2 2
3 B 1.0 3
4 B 1.0 4
Any hint or help is appreciated.
We may use subset from base R: create a logical vector from the conditions, extract the 'gene' values that satisfy them, use %in% to build a logical vector over all rows, and negate it (!) to return the genes that do not. Alternatively, change the > to <= and remove the !.
subset(df, !gene %in% gene[frequency > 0.9 & time == 1])
Output:
gene frequency time
5 B 0.3 1
6 B 0.2 2
7 B 1.0 3
8 B 1.0 4
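The keep-side variant mentioned above (flip > to <= and drop the !) would look like this, with the example data recreated inline:

```r
gene <- c("A", "A", "A", "A", "B", "B", "B", "B")
frequency <- c(1, 1, 0.8, 0.6, 0.3, 0.2, 1, 1)
time <- c(1, 2, 3, 4, 1, 2, 3, 4)
df <- data.frame(gene, frequency, time)

# Keep genes whose frequency at time == 1 is at most 0.9
subset(df, gene %in% gene[frequency <= 0.9 & time == 1])
```

Both forms give the same result here; the keep-side version reads a little more directly as "retain genes that pass the check at time 1".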

Change Column Random Numbers To Serial

In a data set, there is a specific column that has arbitrary values which repeat at a regular interval. I want to replace these with increasing values, as explained below.
Column_B has random data
Column_A Column_B
1.5 0
0.2 1
0.3 5
4.5 6
12.5 7
1.6 0
7.8 1
1.8 5
6.9 6
11.0 7
After transformation Column_B should have
Column_A Column_B
1.5 0
0.2 1
0.3 2
4.5 3
12.5 4
1.6 0
7.8 1
1.8 2
6.9 3
11.0 4
Is there a faster way to do this rather than creating a new column and then replacing it with Column_B? Thanks.
You can use recycling to fill the column with a repeating sequence. For example, if you want the sequence to be 64 long before repeating, then you can use
DF$column_B <- 0:(64 - 1L)
More generally, for patterns like your example, in which each element within the repeating sequence is distinct, you can find how long the sequence is using which, then do the same thing:
seq.length = which(dt$B == dt$B[1L])[2L] - 1L
dt$B = 0:(seq.length - 1L)
We group by the cumulative sum of positions where 'Column_B' is 0 (or where the next element decreases) and assign the within-group row sequence to 'Column_B':
library(data.table)
setDT(df1)[, Column_B := as.integer(seq_len(.N)-1), cumsum(Column_B==0)]
df1
# Column_A Column_B
# 1: 1.5 0
# 2: 0.2 1
# 3: 0.3 2
# 4: 4.5 3
# 5: 12.5 4
# 6: 1.6 0
# 7: 7.8 1
# 8: 1.8 2
# 9: 6.9 3
#10: 11.0 4
Or find the difference between adjacent elements in 'Column_B' and take the cumulative sum of where it decreases to create the grouping variable:
setDT(df1)[, Column_B := as.integer(seq_len(.N)-1), cumsum(c(TRUE, diff(Column_B)< 0))]
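For readers without data.table, the same reset-based grouping can be sketched in base R with ave(); the group id is again the cumulative count of positions where the column drops back down:

```r
df1 <- data.frame(Column_A = c(1.5, 0.2, 0.3, 4.5, 12.5, 1.6, 7.8, 1.8, 6.9, 11.0),
                  Column_B = c(0, 1, 5, 6, 7, 0, 1, 5, 6, 7))

# Group id increments at every reset (where Column_B decreases)
grp <- cumsum(c(TRUE, diff(df1$Column_B) < 0))

# Within each group, replace the values with 0, 1, 2, ...
df1$Column_B <- ave(df1$Column_B, grp, FUN = function(x) seq_along(x) - 1)
df1$Column_B
```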

Return a Data Frame with Only the Max Values

Say I have this data frame, lesson, with 3 columns (User, Course, Score), which looks something like:
User Course Score
A 1.1 9
A 1.1 8
B 1.2 7
Only it has a lot more data. If I want to get a data frame that only has the highest scores for each course by each user, how would I go about doing that?
I tried:
lesson<-lesson[order(lesson$User,lesson$Course,-lesson$User),]
and then
lesson[!duplicated(lesson$User && lesson$Course),]
but I got an error back.
DF <- read.table(text="User Course Score
A 1.1 9
A 1.1 8
B 1.1 1
B 1.2 7",header=TRUE)
aggregate(Score~Course*User,data=DF,FUN=max)
# Course User Score
#1 1.1 A 9
#2 1.1 B 1
#3 1.2 B 7
or you might want to try plyr package
library(plyr)
ddply(DF,.(User,Course),transform,maxScore=max(Score,na.rm=TRUE))
User Course Score maxScore
A 1.1 9 9
A 1.1 8 9
B 1.1 1 1
B 1.2 7 7
or if you want to see the max score only
ddply(DF,.(User,Course),summarise,maxScore=max(Score,na.rm=TRUE))
User Course maxScore
A 1.1 9
B 1.1 1
B 1.2 7
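As a side note, the order/!duplicated approach attempted in the question can also be made to work: the bug is that && is not vectorised, so duplicated never sees the (User, Course) pairs. Passing both columns as a data frame, after sorting scores in decreasing order within each pair, does what was intended (a base R sketch on the example data):

```r
lesson <- data.frame(User = c("A", "A", "B", "B"),
                     Course = c(1.1, 1.1, 1.1, 1.2),
                     Score = c(9, 8, 1, 7))

# Sort so the highest Score comes first within each (User, Course) pair
lesson <- lesson[order(lesson$User, lesson$Course, -lesson$Score), ]

# Keep the first (i.e. highest-scoring) row of each pair
lesson[!duplicated(lesson[c("User", "Course")]), ]
```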

Reordering and reshaping columns in R [duplicate]

Possible Duplicate:
How to sort a dataframe by column(s) in R
I have a dataset that looks like this:
x    y  z
1.0  1  0.2
1.1  1  1.5
1.2  1  3.0
1.0  2  8.1
1.1  2  1.0
1.2  2  0.6
What I would like is to organise the dataset first by x in increasing order, then by y, such that:
x    y  z
1.0  1  0.2
1.0  2  8.1
1.1  1  1.5
1.1  2  1.0
1.2  1  3.0
1.2  2  0.6
I know that the apply, mapply, tapply, etc. functions reorganise datasets, but I must admit that I don't really understand the differences between them, nor how and when to apply which.
Thank you for your suggestions.
You can order your data using the order function. There is no need for any apply family function.
Assuming your data is in a data.frame called df:
df[order(df$x, df$y), ]
x y z
1 1.0 1 0.2
4 1.0 2 8.1
2 1.1 1 1.5
5 1.1 2 1.0
3 1.2 1 3.0
6 1.2 2 0.6
See ?order for more help.
On a side note: reshaping in general refers to changing the shape of a data.frame, e.g. converting it from wide to tall format. This is not what is required here.
You can also use the arrange() function in plyr for this. Wrap the variables in desc() that you want to sort the other direction.
> library(plyr)
> dat <- head(ChickWeight)
> arrange(dat,weight,Time)
weight Time Chick Diet
1 42 0 1 1
2 51 2 1 1
3 59 4 1 1
4 64 6 1 1
5 76 8 1 1
6 93 10 1 1
This is the fastest way to do this that's still readable, if speed matters in your application. Benchmarks here:
How to sort a dataframe by column(s)?

Using melt / cast with variables of uneven length in R

I'm working with a large data frame that I want to pivot, so that variables in a column become rows across the top.
I've found the reshape package very useful in such cases, except that the cast function defaults to fun.aggregate=length. Presumably this is because I'm performing these operations by "case" and the number of variables measured varies among cases.
I would like to pivot so that missing variables are denoted as "NA"s in the pivoted data frame.
So, in other words, I want to go from a molten data frame like this:
Case  Variable  Value
1     1         2.3
1     2         2.1
1     3         1.3
2     1         4.3
2     2         2.5
3     1         1.8
3     2         1.9
3     3         2.3
3     4         2.2
To something like this:
Case  Variable 1  Variable 2  Variable 3  Variable 4
1     2.3         2.1         1.3         NA
2     4.3         2.5         NA          NA
3     1.8         1.9         2.3         2.2
The code dcast(data,...~Variable) again defaults to fun.aggregate=length, which does not preserve the original values.
Thanks for your help, and let me know if anything is unclear!
It is just a matter of including all of the variables in the cast call. reshape expects the Value column to be called value, so it throws a warning, but it still works fine. It was using fun.aggregate=length because Case was missing from the formula, so it aggregated over the values in Case.
Try: cast(data, Case~Variable)
data <- data.frame(Case = c(1,1,1,2,2,3,3,3,3),
                   Variable = c(1,2,3,1,2,1,2,3,4),
                   Value = c(2.3,2.1,1.3,4.3,2.5,1.8,1.9,2.3,2.2))
cast(data, Case ~ Variable)
Using Value as value column. Use the value argument to cast to override this choice
Case 1 2 3 4
1 1 2.3 2.1 1.3 NA
2 2 4.3 2.5 NA NA
3 3 1.8 1.9 2.3 2.2
Edit: as a response to the comment from #Jon. What do you do if there is one more variable in the data frame?
data <- data.frame(expt = c(1,1,1,1,2,2,2,2,2),
                   func = c(1,1,1,2,2,3,3,3,3),
                   variable = c(1,2,3,1,2,1,2,3,4),
                   value = c(2.3,2.1,1.3,4.3,2.5,1.8,1.9,2.3,2.2))
cast(data,expt+variable~func)
expt variable 1 2 3
1 1 1 2.3 4.3 NA
2 1 2 2.1 NA NA
3 1 3 1.3 NA NA
4 2 1 NA NA 1.8
5 2 2 NA 2.5 1.9
6 2 3 NA NA 2.3
7 2 4 NA NA 2.2
Here is one solution. It does not use the package or function you mention, but it could be of use. Suppose your data frame is called df:
M <- matrix(NA,
            nrow = length(unique(df$Case)),
            ncol = length(unique(df$Variable)) + 1,
            dimnames = list(NULL, c('Case', paste('Variable', sort(unique(df$Variable))))))
irow <- match(df$Case, unique(df$Case))
icol <- match(df$Variable, unique(df$Variable)) + 1
ientry <- irow + (icol - 1) * nrow(M)
M[ientry] <- df$Value
M[, 1] <- unique(df$Case)
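For completeness, here is the same matrix construction run end to end on the molten example, with df recreated inline so the snippet is self-contained:

```r
df <- data.frame(Case = c(1, 1, 1, 2, 2, 3, 3, 3, 3),
                 Variable = c(1, 2, 3, 1, 2, 1, 2, 3, 4),
                 Value = c(2.3, 2.1, 1.3, 4.3, 2.5, 1.8, 1.9, 2.3, 2.2))

# Pre-allocate an all-NA matrix: one row per Case, one column per Variable,
# plus a leading Case column
M <- matrix(NA,
            nrow = length(unique(df$Case)),
            ncol = length(unique(df$Variable)) + 1,
            dimnames = list(NULL, c("Case", paste("Variable", sort(unique(df$Variable))))))

# Linear index of each (Case, Variable) cell, then scatter the values in;
# cells with no observation simply keep their NA
irow <- match(df$Case, unique(df$Case))
icol <- match(df$Variable, unique(df$Variable)) + 1
M[irow + (icol - 1) * nrow(M)] <- df$Value
M[, 1] <- unique(df$Case)
M
```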
To avoid the warning message, you could subset the data frame by another variable, i.e. a categorical variable with three levels a, b, c. In your current data, category a has 70 cases, b has 80, and c has 90, so the cast function doesn't know how to aggregate them.
Hope this helps.
