I have a data frame with two columns in R, and I want to create a third column that rolls through both columns with a lag of 2 and checks whether a condition is satisfied, as described in the table below.
The condition is a rolling ifelse and goes like this:
IF -A1<B3<A1 TRUE ELSE FALSE
IF -A2<B4<A2 TRUE ELSE FALSE
IF -A3<B5<A3 TRUE ELSE FALSE
IF -A4<B6<A4 TRUE ELSE FALSE
A   B   CHECK
1   4   NA
2   5   NA
3   6   FALSE
4   1   TRUE
5  -4   FALSE
6   1   TRUE
How can I do it in R? Is there a base R function for this, or a way within the dplyr framework?
Since R is vectorized, you can do that with one command, using for instance dplyr::lag:
library(dplyr)
df %>%
mutate(CHECK = -lag(A, n=2) < B & lag(A, n=2) > B)
A B CHECK
1 1 4 NA
2 2 5 NA
3 3 6 FALSE
4 4 1 TRUE
5 5 -4 FALSE
6 6 1 TRUE
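If you'd rather stay in base R, here is a minimal sketch of the same idea, shifting A down by two positions by hand (padding with NA) and applying the check:
# shift A down by two rows, then test -A_lag2 < B < A_lag2
A_lag2 <- c(NA, NA, head(df$A, -2))
df$CHECK <- -A_lag2 < df$B & df$B < A_lag2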
I have a question in R. Here is some example data:
> exdata <- data.frame(a = rep(1:4, each = 3),
+ b = c(1, 1, 2, 4, 5, 3, 3, 2, 3, 9, 9, 9))
> exdata
a b
1 1 1
2 1 1
3 1 2
4 2 4
5 2 5
6 2 3
7 3 3
8 3 2
9 3 3
10 4 9
11 4 9
12 4 9
> exdata[duplicated(exdata), ]
a b
2 1 1
9 3 3
11 4 9
12 4 9
I tried to use the duplicated() function to find all the duplicate records in the exdata data frame, but it only finds some of the duplicated records, which makes it hard to see at a glance whether duplicates exist.
I'm looking for a solution that returns the following results
a b
1 1 1
2 1 1
7 3 3
9 3 3
10 4 9
11 4 9
12 4 9
Can the duplicated() function be used to get this result? Or is there another function I should use?
I would appreciate your help.
duplicated returns a logical vector with the same length as its argument, marking elements that have already occurred earlier. It has a method for data frames, duplicated.data.frame, that looks for duplicated rows (and so returns a logical vector of length nrow(exdata)). Using that as a logical index returns exactly those rows that have occurred once before. It WON'T, however, return the first occurrence of those rows.
Look at the index vector you're using:
duplicated(exdata)
# [1] FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE TRUE TRUE
But you can combine it with fromLast = TRUE to get all of the occurrences of these rows:
exdata[duplicated(exdata) | duplicated(exdata, fromLast = TRUE),]
# a b
# 1 1 1
# 2 1 1
# 7 3 3
# 9 3 3
# 10 4 9
# 11 4 9
# 12 4 9
Look at the logical vector for duplicated(exdata, fromLast = TRUE), and its combination with duplicated(exdata), to convince yourself:
duplicated(exdata, fromLast = TRUE)
# [1] TRUE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE TRUE FALSE
duplicated(exdata) | duplicated(exdata, fromLast = TRUE)
# [1] TRUE TRUE FALSE FALSE FALSE FALSE TRUE FALSE TRUE TRUE TRUE TRUE
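If dplyr is an option, an equivalent sketch keeps every row whose (a, b) combination occurs more than once:
library(dplyr)
exdata %>%
  group_by(a, b) %>%      # group on both columns, i.e. the full row contents
  filter(n() > 1) %>%     # keep only groups that occur more than once
  ungroup()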
I'm relatively new to R and struggle with "vectorizing" my code, even though I appreciate that's the proper way to do it in R.
I need to set a value in a dataframe to be the minimum time for the IDs.
ID isTrue RealTime MinTime
1 TRUE 16
1 FALSE 8
1 TRUE 10
2 TRUE 7
2 TRUE 30
3 FALSE 3
To be turned into:
ID isTrue RealTime MinTime
1 TRUE 16 10
1 FALSE 8
1 TRUE 10 10
2 TRUE 7 7
2 TRUE 30 7
3 FALSE 3
The following works perfectly. However, it takes 10 minutes to run which isn't ideal:
for (i in 1:nrow(df)){
  if (df[i, 'isTrue']) {
    prevTime <- sqldf(paste('Select min(MinTime) from df where ID =', df[i, 'ID'], sep = " "))[1, 1]
    if (is.na(prevTime) | is.na(df[i, 'MinTime']) | df[i, 'MinTime'] < prevTime){
      df[i, 'MinTime'] <- df[i, 'RealTime']
    } else {
      df[i, 'MinTime'] <- prevTime
    }
  }
}
How should I do this properly? I take it using for or do loops are not the best way in R. I've been looking at the apply() and aggregate.data.frame() functions but can't make sense of how to do this. Can someone point me in the right direction? Much appreciated!!
Here is a two-line base R solution using ave, pmax, and is.na<-.
# calculate minimum for each ID, excluding FALSE instances
df$MinTime <- ave(pmax(df$RealTime, (!df$isTrue) * max(df$RealTime)), df$ID, FUN=min)
# turn FALSE instances into NA
is.na(df$MinTime) <- (!df$isTrue)
which returns
df
ID isTrue RealTime MinTime
1 1 TRUE 16 10
2 1 FALSE 8 NA
3 1 TRUE 10 10
4 2 TRUE 7 7
5 2 TRUE 30 7
6 3 FALSE 3 NA
In the first line, pmax is used to construct a vector that keeps the observed RealTime where df$isTrue is TRUE and substitutes the maximum RealTime in the data.frame where it is FALSE. This new vector is used in the minimum calculation. The FALSE values are then set to NA in the second line.
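To illustrate with the sample data (a quick sketch of the intermediate steps, not part of the solution itself):
pmax(df$RealTime, (!df$isTrue) * max(df$RealTime))
# [1] 16 30 10  7 30 30    # RealTime where isTrue, max(RealTime) = 30 otherwise
ave(pmax(df$RealTime, (!df$isTrue) * max(df$RealTime)), df$ID, FUN = min)
# [1] 10 10 10  7  7 30    # per-ID minima, before the FALSE rows are set to NA
Note that this relies on RealTime being non-negative; a negative time on a TRUE row would be clipped to 0 by pmax.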
data
df <- read.table(header=T, text="ID isTrue RealTime
1 TRUE 16
1 FALSE 8
1 TRUE 10
2 TRUE 7
2 TRUE 30
3 FALSE 3")
It should be far faster with a dplyr chain. Here we group the data frame by both ID and isTrue and get the minima at the group level. Then we ungroup again and simply blank out the minima for the FALSE rows.
library(dplyr)
df %>%
group_by(ID, isTrue) %>%
mutate(Min.all = min(RealTime)) %>%
ungroup() %>%
transmute(ID, isTrue, RealTime, MinTime = ifelse(isTrue == T, Min.all, ""))
Output:
# A tibble: 6 × 4
ID isTrue RealTime MinTime
<int> <lgl> <int> <chr>
1 1 TRUE 16 10
2 1 FALSE 8
3 1 TRUE 10 10
4 2 TRUE 7 7
5 2 TRUE 30 7
6 3 FALSE 3
I'd really recommend you get familiar with dplyr if you're going to be doing lots of data frame manipulation.
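If you would rather keep MinTime numeric (the ifelse(isTrue == T, Min.all, "") above coerces the column to character), a variant along these lines should work, though it is only a sketch:
df %>%
  group_by(ID, isTrue) %>%
  mutate(MinTime = min(RealTime)) %>%                    # group-level minima as before
  ungroup() %>%
  mutate(MinTime = ifelse(isTrue, MinTime, NA_real_))    # NA instead of "" keeps the column numeric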
Someone suggested using the ave() function and the following works and is fast although it returns a ton of warnings:
df$MinTime <- ave(df$RealTime, df$ID, df$isTrue, FUN = min)
df$MinTime <- ifelse(df$isTrue, df$MinTime, NA)
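The warnings most likely come from ave() splitting on every combination of ID and isTrue, including the empty ones (e.g. ID 2 with isTrue = FALSE); min() on a zero-length vector returns Inf with a warning. A sketch that avoids them drops the unused combinations first:
# build the grouping factor ourselves and drop empty ID x isTrue combinations
grp <- interaction(df$ID, df$isTrue, drop = TRUE)
df$MinTime <- ave(df$RealTime, grp, FUN = min)
df$MinTime <- ifelse(df$isTrue, df$MinTime, NA)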
The code in the question could be simplified by doing it all in SQL or all in R (appropriately vectorized) rather than half and half. There are already some R solutions so here is an SQL solution that shows that the problem amounts to aggregating a custom self-join.
library(sqldf)
sqldf("select a.*, min(b.RealTime) minRealTime
from df a
left join df b on a.ID = b.ID and a.isTRUE and b.isTRUE
group by a.rowid")
giving:
ID isTrue RealTime minRealTime
1 1 TRUE 16 10
2 1 FALSE 8 NA
3 1 TRUE 10 10
4 2 TRUE 7 7
5 2 TRUE 30 7
6 3 FALSE 3 NA
I have a data frame with two columns. I want to add a new column to df that indicates whether the value in the first column matches the value in the second column.
I tried:
df<-data.frame(A=c("1","test","2","3",NA,"Test", NA),B=c("1","No Match","No Match","3",NA,"Test", "No Match"))
df[df$A == df$B ]
However, I get:
Error in Ops.factor(df$A, df$B) : level sets of factors are different
Any recommendation on what I am doing wrong?
Deal with the NA values first and then add your column:
> df[is.na(df)]=""
> df$New = with(df, A==B)
> df
     A        B   New
1    1        1  TRUE
2 test No Match FALSE
3    2 No Match FALSE
4    3        3  TRUE
5                TRUE
6 Test     Test  TRUE
7      No Match FALSE
Or remove NA from your initial data.frame with df = df[complete.cases(df),] and then add the column.
If you really want to have FALSE when there is NA in the A or B column:
> transform(df, New=ifelse(is.na(A)|is.na(B), FALSE, df$A==df$B))
A B New
1 1 1 TRUE
2 test No Match FALSE
3 2 No Match FALSE
4 3 3 TRUE
5 <NA> <NA> FALSE
6 Test Test TRUE
7 <NA> No Match FALSE
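As an aside, the error in the question comes from A and B being factors with different level sets (older versions of R convert strings to factors by default in data.frame()). A sketch that compares the columns as character, treating NA as a non-match:
df <- data.frame(A = c("1","test","2","3",NA,"Test", NA),
                 B = c("1","No Match","No Match","3",NA,"Test", "No Match"))
# compare as character so differing factor levels don't matter; NA counts as FALSE
df$New <- !is.na(df$A) & !is.na(df$B) & as.character(df$A) == as.character(df$B)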
I have nested data that looks like this:
ID Date Behavior
1 1 FALSE
1 2 FALSE
1 3 TRUE
2 3 FALSE
2 5 FALSE
2 6 TRUE
2 7 FALSE
3 1 FALSE
3 2 TRUE
I'd like to create a column called counter in which, for each unique ID, the counter increments by one on each row until Behavior = TRUE.
I am expecting this result:
ID Date Behavior counter
1 1 FALSE 1
1 2 FALSE 2
1 3 TRUE 3
2 3 FALSE 1
2 5 FALSE 2
2 6 TRUE 3
2 7 FALSE
3 1 FALSE 1
3 2 TRUE 2
Ultimately, I would like to pull the minimum counter in which the observation occurs for each unique ID. However, I'm having trouble developing a solution for this current counter issue.
Any and all help is greatly appreciated!
I'd like to create a counter within each group of unique IDs and, from there, ultimately pull the row-level info; the question is how long, on average, it takes to reach a TRUE.
I sense there might be an XY problem going on here. You can answer your latter question directly, like so:
> library(plyr)
> mean(daply(d, .(ID), function(grp)min(which(grp$Behavior))))
[1] 2.666667
(where d is your data frame.)
Here's a dplyr solution that finds the row number for each TRUE in each ID:
library(dplyr)
newdf <- yourdataframe %>%
group_by(ID) %>%
summarise(
ftrue = which(Behavior))
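With the sample data this should give one row per ID (it assumes a single TRUE per ID; with several TRUEs, which() would return more than one value per group), and the average time to reach a TRUE can then be taken from it:
newdf
#  ID ftrue
#   1     3
#   2     3
#   3     2
mean(newdf$ftrue)
# [1] 2.666667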
do.call(rbind, by(df, list(df$ID), function(x) {
  n <- nrow(x)
  m <- which(x$Behavior)                            # row within the ID where Behavior is TRUE
  data.frame(x, Counter = c(1:m, rep(NA, n - m)))   # count up to the TRUE, NA afterwards
}))
ID Date Behavior Counter
1.1 1 1 FALSE 1
1.2 1 2 FALSE 2
1.3 1 3 TRUE 3
2.4 2 3 FALSE 1
2.5 2 5 FALSE 2
2.6 2 6 TRUE 3
2.7 2 7 FALSE NA
3.8 3 1 FALSE 1
3.9 3 2 TRUE 2
df = read.table(text = "ID Date Behavior
1 1 FALSE
1 2 FALSE
1 3 TRUE
2 3 FALSE
2 5 FALSE
2 6 TRUE
2 7 FALSE
3 1 FALSE
3 2 TRUE", header = T)
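For completeness, a dplyr sketch of the same counter (assuming at most one TRUE per ID, as in the example data; IDs with no TRUE at all get NA for every row):
library(dplyr)
df %>%
  group_by(ID) %>%
  mutate(counter = ifelse(row_number() <= which(Behavior)[1], row_number(), NA)) %>%
  ungroup()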
With data as below, I'm trying to use the test cols (test_A, etc.) to pick out their corresponding time cols (time_A, etc.) where the test is TRUE, and then find the minimum of all true test times.
[ID] [time_A] [time_B] [time_C] [test_A] [test_B] [test_C] [min_true_time]
[1,] 1 2 3 4 FALSE TRUE FALSE ?
[2,] 2 -4 5 6 TRUE TRUE FALSE ?
[3,] 3 6 1 -2 TRUE TRUE TRUE ?
[4,] 4 -2 3 4 TRUE FALSE FALSE ?
My actual data set is quite large so my attempts at if and for loops have failed miserably. But I can't make any progress on an apply function.
A more negative time, say -2, would be considered the minimum for row 3.
Any suggestions are welcomed gladly
You don't give much information, but I think this does what you need. No idea if it is efficient enough, since you don't say how big your dataset actually is.
#I assume your data is in a data.frame:
df <- read.table(text="ID time_A time_B time_C test_A test_B test_C
1 1 2 3 4 FALSE TRUE FALSE
2 2 -4 5 6 TRUE TRUE FALSE
3 3 6 1 -2 TRUE TRUE TRUE
4 4 -2 3 4 TRUE FALSE FALSE")
# loop over all rows and subset columns 2:4 with columns 5:7, then take the min
df$min_true_time <- sapply(1:nrow(df), function(i) min(df[i,2:4][unlist(df[i,5:7])]))
df
# ID time_A time_B time_C test_A test_B test_C min_true_time
#1 1 2 3 4 FALSE TRUE FALSE 3
#2 2 -4 5 6 TRUE TRUE FALSE -4
#3 3 6 1 -2 TRUE TRUE TRUE -2
#4 4 -2 3 4 TRUE FALSE FALSE -2
Another way, which might be faster (I'm not in the mood for benchmarking):
m <- as.matrix(df[,2:4])
m[!df[,5:7]] <- NA
df$min_true_time <- apply(m,1,min,na.rm=TRUE)
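One caveat, in case some row can have every test FALSE: min() with na.rm = TRUE over an all-NA row returns Inf with a warning, so you may want to clean those up afterwards (sketch):
# rows where every test was FALSE end up as Inf; treat them as "no true time"
df$min_true_time[is.infinite(df$min_true_time)] <- NA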