Loop in R to find previous match

I need some help writing a loop in R. I am having trouble selecting the previous match when the same id occurs, and then writing the OLD_RANK and NEW_RANK columns.
OLD_RANK must be the NEW_RANK of the previous match found.
NEW_RANK <- OLD_RANK + 0.05*(S1 - S2)
Here is my data for this example:
JUNK <- matrix(c(1,1,10,20, 3,2,30,40, 1,3,60,4,
                 3,4,5,40, 1,5,10,30, 7,6,20,20), ncol = 4, byrow = TRUE)
colnames(JUNK) <- c("ID1", "DAY", "S1", "S2")
JUNK <- as.data.frame(JUNK)
What I thought could be a good start:
# Subset to find the previous match: find matches before the current day and,
# if more than one is found, choose the row with the highest value in DAY.
# loop over each row:
s1 <- subset(s1, DAY < day)
s1 <- subset(s1, DAY == max(DAY))
# if no match is found: JUNK$OLD_RANK <- 35 and JUNK$NEW_RANK <- JUNK$OLD_RANK + 0.05*(S1 - S2)
# if a previous match is found: JUNK$NEW_RANK <- JUNK$OLD_RANK + 0.05*(S1 - S2)
Expected result:
ID1 DAY S1 S2 OLD_RANK NEW_RANK
  1   1 10 20     35      34.5
  3   2 30 40     35      34.5
  1   3 60  4     34.5    37.3
  3   4  5 40     34.5    32.75
  1   5 10 30     37.3    36.3
  7   6 20 20     35      35
Any help is appreciated.

Here's one approach:
library(dplyr)
JUNK2 <- JUNK %>%
  group_by(ID1) %>%
  mutate(change   = 0.05 * (S1 - S2),
         NEW_RANK = 35 + cumsum(change),
         OLD_RANK = if_else(is.na(lag(NEW_RANK)), 35, lag(NEW_RANK))) %>%
  ungroup() # EDIT: added to end with an ungrouped table
Result:
JUNK2
# A tibble: 6 x 7
    ID1   DAY    S1    S2 change NEW_RANK OLD_RANK
  <dbl> <dbl> <dbl> <dbl>  <dbl>    <dbl>    <dbl>
1     1     1    10    20  -0.5      34.5     35
2     3     2    30    40  -0.5      34.5     35
3     1     3    60     4   2.8      37.3     34.5
4     3     4     5    40  -1.75     32.8     34.5
5     1     5    10    30  -1        36.3     37.3
6     7     6    20    20   0        35       35
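For comparison, a base R loop that follows the original plan (look up the most recent earlier match per ID) might look like the sketch below. It assumes the JUNK data frame above, with rows already ordered by DAY so the previous match's NEW_RANK is filled in before it is needed:
JUNK$OLD_RANK <- NA
JUNK$NEW_RANK <- NA
for (i in seq_len(nrow(JUNK))) {
  # rows with the same ID played on an earlier day
  prev <- which(JUNK$ID1 == JUNK$ID1[i] & JUNK$DAY < JUNK$DAY[i])
  if (length(prev) == 0) {
    JUNK$OLD_RANK[i] <- 35                      # no earlier match: start at 35
  } else {
    last <- prev[which.max(JUNK$DAY[prev])]     # most recent earlier match
    JUNK$OLD_RANK[i] <- JUNK$NEW_RANK[last]
  }
  JUNK$NEW_RANK[i] <- JUNK$OLD_RANK[i] + 0.05 * (JUNK$S1[i] - JUNK$S2[i])
}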


I want to create a loop with a large dataframe in R

Problem
I want to create a loop from the data in df1; it's important that the data is taken one ID value at a time.
I'm unsure how this can be done in R.
#original dataset
id=c(1,1,1,2,2,2,3,3,3)
dob=c("11-08","12-04","04-03","10-04","03-07","06-02","12-09","01-01","03-08")
count=c(1,6,3,2,5,6,8,6,4)
outcome=rep(1:0,length.out=9)
df1=data.frame(id,dob,count,outcome)
#these changes need to be completed separately for each id value
df2<-df1[df1$id==1,]
df2<-df2[,-4]
addition<-df2$count+45
df2<-cbind(df2,addition)
df3<-df1[df1$id==2,]
df3<-df3[,-4]
addition<-df3$count+45
df3<-cbind(df3,addition)
df4<-df1[df1$id==3,]
df4<-df4[,-4]
addition<-df4$count+45
df4<-cbind(df4,addition)
df5<-rbind(df2,df3,df4)
Expected Output
df5
  id   dob count addition
1  1 11-08     1       46
2  1 12-04     6       51
3  1 04-03     3       48
4  2 10-04     2       47
5  2 03-07     5       50
6  2 06-02     6       51
7  3 12-09     8       53
8  3 01-01     6       51
9  3 03-08     4       49
In the present context (this could be a simplified example) it doesn't even need a loop, as we can directly add the number to 'count':
df1$addition <- df1$count + 45
However, if it is a complicated operation that needs to look at each 'id' separately, then do a group_by operation:
library(dplyr)
df1 %>%
  group_by(id) %>%
  mutate(addition = count + 45)
# A tibble: 9 x 5
# Groups: id [3]
# id dob count outcome addition
# <dbl> <fct> <dbl> <int> <dbl>
#1 1 11-08 1 1 46
#2 1 12-04 6 0 51
#3 1 04-03 3 1 48
#4 2 10-04 2 0 47
#5 2 03-07 5 1 50
#6 2 06-02 6 0 51
#7 3 12-09 8 1 53
#8 3 01-01 6 0 51
#9 3 03-08 4 1 49
Also, data.table syntax would be
library(data.table)
setDT(df1)[, addition := count + 45, by = id]
or simply
setDT(df1)[, addition := count + 45]
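And if each id genuinely needed its own pass, the manual df2/df3/df4 code from the question generalizes to any number of ids with split and lapply. A base R sketch (assuming df1 is still a plain data frame; re-create it if setDT above converted it):
# split by id, transform each piece, and rebind
pieces <- lapply(split(df1, df1$id), function(d) {
  d <- d[, -4]                 # drop the outcome column, as in the question
  d$addition <- d$count + 45
  d
})
df5 <- do.call(rbind, pieces)
rownames(df5) <- NULL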

Find first occurrence in two variables in df

I need to find the first two times my df meets a certain condition, grouped by two variables. I am trying to use the ddply function, but I am doing something wrong with the ".variables" argument.
So in this example, I'm trying to find the first two times x > 30 and y > 30 in each group/trial.
The way I'm using ddply gives me the first two such times in the whole dataset, then repeats that for every group.
set.seed(1)
df <- data.frame(matrix(nrow = 200, ncol = 5))
colnames(df) <- c("group", "trial", "x", "y", "hour")
df$group <- rep(c("A", "B", "C", "D"), each = 50)
df$trial <- rep(c(rep(1, times = 25), rep(2, times = 25)), times = 4)
df[, 3:4] <- runif(400, 0, 50)
df$hour <- rep(1:25, times = 8)
library(plyr)
ddply(.data = df, .variables = c("group", "trial"), .fun = function(x) {
  i <- which(df$x > 30 & df$y > 30)[1:2]
  if (!is.na(i)) x[i, ]
})
Expected results:
group trial x y hour
13 A 1 34.3511423 38.161134 13
15 A 1 38.4920710 40.931734 15
36 A 2 33.4233369 34.481392 11
37 A 2 39.7119930 34.470671 12
52 B 1 43.0604738 46.645491 2
65 B 1 32.5435234 35.123126 15
But instead, my code is finding c(1,4) from the first group*trial and repeating that for every group*trial:
group trial x y hour
1 A 1 34.351142 38.161134 13
2 A 1 38.492071 40.931734 15
3 A 2 5.397181 27.745031 13
4 A 2 20.563721 22.636003 15
5 B 1 22.953286 13.898301 13
6 B 1 32.543523 35.123126 15
I would also like there to be rows of NA if a second occurrence isn't present in a group*trial.
Thanks,
I think this is what you want:
library(tidyverse)
df %>% group_by(group, trial) %>% filter(x > 30 & y > 30) %>% slice(1:2)
Result:
# A tibble: 16 x 5
# Groups: group, trial [8]
group trial x y hour
<chr> <dbl> <dbl> <dbl> <int>
1 A 1 33.5 46.3 4
2 A 1 32.6 42.7 11
3 A 2 35.9 43.6 4
4 A 2 30.5 42.7 14
5 B 1 33.0 38.1 2
6 B 1 40.5 30.4 7
7 B 2 48.6 33.2 2
8 B 2 34.1 30.9 4
9 C 1 33.0 45.1 1
10 C 1 30.3 36.7 17
11 C 2 44.8 33.9 1
12 C 2 41.5 35.6 6
13 D 1 44.2 34.3 12
14 D 1 39.1 40.0 23
15 D 2 39.4 47.5 4
16 D 2 42.1 40.1 10
(slightly different from your results, probably a different R version)
I recommend using dplyr or data.table rather than plyr. From the plyr GitHub page:
plyr is retired: this means only changes necessary to keep it on CRAN
will be made. We recommend using dplyr (for data frames) or purrr (for
lists) instead.
Since someone has already provided a solution with dplyr, here is one option with data.table.
In the selection df[i, j, by] I am selecting rows which match your criteria in i, grouping by the given variables in by, and selecting the first two rows (head) of each group-specific subset of the data .SD. All of this inside the brackets is data.table-specific, and only works because I converted df to a data.table first with setDT.
library(data.table)
setDT(df)
df[x > 30 & y > 30, head(.SD, 2), by = .(group, trial)]
# group trial x y hour
# 1: A 1 34.35114 38.16113 13
# 2: A 1 38.49207 40.93173 15
# 3: A 2 33.42334 34.48139 11
# 4: A 2 39.71199 34.47067 12
# 5: B 1 43.06047 46.64549 2
# 6: B 1 32.54352 35.12313 15
# 7: B 2 48.03090 38.53685 5
# 8: B 2 32.11441 49.07817 18
# 9: C 1 32.73620 33.68561 1
# 10: C 1 32.00505 31.23571 20
# 11: C 2 32.13977 40.60658 9
# 12: C 2 34.13940 49.47499 16
# 13: D 1 36.18630 34.94123 19
# 14: D 1 42.80658 46.42416 23
# 15: D 2 37.05393 43.24038 3
# 16: D 2 44.32255 32.80812 8
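One caveat about the question's NA request: head(.SD, 2) simply returns fewer rows when a group has fewer than two matches. To pad to two rows with NAs, indexing with .SD[1:2] should work instead, since out-of-range row indices yield NA-filled rows (groups with zero matches are still dropped, because i filters them out before grouping):
df[x > 30 & y > 30, .SD[1:2], by = .(group, trial)]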
To try a solution that is closer to what you've tried so far, we can do the following:
ddply(.data = df, .variables = c("group", "trial"), .fun = function(df_temp) {
  i <- which(df_temp$x > 30 & df_temp$y > 30)[1:2]
  df_temp[i, ]
})
Some explanation:
One problem with the code that you provided is that you used df inside of ddply. So you defined .fun = function(x), but you didn't look for cases of x > 30 & y > 30 in x, you looked in df. Further, your code indexes x with i, but i was computed from df. Finally, to my understanding there is no need for if (!is.na(i)) x[i, ]. If there is only one row that meets your condition, you will get a row with NAs anyway, because you use which(df_temp$x > 30 & df_temp$y > 30)[1:2].
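That last point is easy to check: indexing a data frame with an NA row index returns an all-NA row, so which(...)[1:2] already produces the NA padding the question asked for whenever only one row matches:
which(c(TRUE, FALSE))[1:2]       # gives 1 NA
df[which(c(TRUE, FALSE))[1:2], ] # row 1 of df, plus an all-NA row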
Using dplyr, you can also do:
df %>%
  group_by(group, trial) %>%
  slice(which(x > 30 & y > 30)[1:2])
group trial x y hour
<chr> <dbl> <dbl> <dbl> <int>
1 A 1 34.4 38.2 13
2 A 1 38.5 40.9 15
3 A 2 33.4 34.5 11
4 A 2 39.7 34.5 12
5 B 1 43.1 46.6 2
6 B 1 32.5 35.1 15
7 B 2 48.0 38.5 5
8 B 2 32.1 49.1 18
Since everything else is covered, here is a base R version using split:
output <- do.call(rbind, lapply(split(df, list(df$group, df$trial)),
function(new_df) new_df[with(new_df, head(which(x > 30 & y > 30), 2)), ]
))
rownames(output) <- NULL
output
# group trial x y hour
#1 A 1 34.351 38.161 13
#2 A 1 38.492 40.932 15
#3 B 1 43.060 46.645 2
#4 B 1 32.544 35.123 15
#5 C 1 32.736 33.686 1
#6 C 1 32.005 31.236 20
#7 D 1 36.186 34.941 19
#8 D 1 42.807 46.424 23
#9 A 2 33.423 34.481 11
#10 A 2 39.712 34.471 12
#11 B 2 48.031 38.537 5
#12 B 2 32.114 49.078 18
#13 C 2 32.140 40.607 9
#14 C 2 34.139 49.475 16
#15 D 2 37.054 43.240 3
#16 D 2 44.323 32.808 8

Use dplyr (I think) to manipulate a dataset

I am given a data set called ChickWeight. This has the weights of chicks over a time period. I need to introduce a new variable that measures the current weight difference compared to day 0.
I first cleaned the data set and kept only the chicks that were recorded for all 12 weigh-ins:
library(datasets)
library(dplyr)
Frequency <- plyr::count(ChickWeight$Chick) # count() on a bare vector, returning columns x and freq, is plyr's count, not dplyr's
colnames(Frequency)[colnames(Frequency) == "x"] <- "Chick"
a <- inner_join(ChickWeight, Frequency, by = 'Chick')
complete <- a[(a$freq == 12), ]
head(complete, 3)
The data set is ChickWeight, from R's built-in datasets package.
You can try:
library(dplyr)
ChickWeight %>%
  group_by(Chick) %>%
  filter(any(Time == 21)) %>%
  mutate(wdiff = weight - first(weight))
# A tibble: 540 x 5
# Groups: Chick [45]
weight Time Chick Diet wdiff
<dbl> <dbl> <ord> <fct> <dbl>
1 42 0 1 1 0
2 51 2 1 1 9
3 59 4 1 1 17
4 64 6 1 1 22
5 76 8 1 1 34
6 93 10 1 1 51
7 106 12 1 1 64
8 125 14 1 1 83
9 149 16 1 1 107
10 171 18 1 1 129
# ... with 530 more rows
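Note that filter(any(Time == 21)) keeps the chicks that reached the final weigh-in, which can differ from the question's "recorded for all 12 weigh-ins" criterion if a chick missed an intermediate measurement. A group-size filter is a closer match for that; a sketch using n(), the number of rows in the current group:
library(dplyr)
ChickWeight %>%
  group_by(Chick) %>%
  filter(n() == 12) %>%                       # keep chicks with all 12 recordings
  mutate(wdiff = weight - first(weight)) %>%  # difference vs the day-0 weight
  ungroup()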

Better way of binning data in a group in a data frame by equal intervals

I have a data frame that is characterized by many different IDs. For every ID there are multiple events, characterized by the cumulative time between events (hours) and the duration of each event (seconds). It looks something like:
Id <- c(1,1,1,1,1,1,2,2,2,2,2)
cumulative_time<-c(0,3.58,8.88,11.19,21.86,29.54,0,5,14,19,23)
duration<-c(188,124,706,53,669,1506.2,335,349,395,385,175)
test = data.frame(Id,cumulative_time,duration)
> test
   Id cumulative_time duration
1   1            0.00    188.0
2   1            3.58    124.0
3   1            8.88    706.0
4   1           11.19     53.0
5   1           21.86    669.0
6   1           29.54   1506.2
7   2            0.00    335.0
8   2            5.00    349.0
9   2           14.00    395.0
10  2           19.00    385.0
11  2           23.00    175.0
I would like to group by the ID and then restructure each group by binning the cumulative time into intervals of, say, 10 hours, summing the durations that occur within each 10-hour interval. The bins should run from 0 to 30 hours, so there would be 3 bins.
I looked at the cut function and managed to make a hack of it within a data frame. Even as a new R user, I know it isn't pretty:
test_cut = test %>%
  mutate(bin_durations = cut(test$cumulative_time, breaks = c(0, 10, 20, 30),
                             labels = c("10", "20", "30"), include.lowest = TRUE)) %>%
  group_by(Id, bin_durations) %>%
  mutate(total_duration = sum(duration)) %>%
  select(Id, bin_durations, total_duration) %>%
  distinct()
which gives the output:
test_cut
  Id bin_durations total_duration
1  1            10         1018.0
2  1            20           53.0
3  1            30         2175.2
4  2            10          684.0
5  2            20          780.0
6  2            30          175.0
Ultimately I want the interval window and number of bins to be arbitrary: if I have a span of 5000 hours and want to bin in 1-hour samples, I would use breaks = seq(0, 5000, 1) and labels = as.character(seq(1, 5000, 1)).
This will also be applied to a very large data frame, so computational speed is somewhat desired.
A dplyr solution would be great since I am applying the binning per group.
My guess is there is a nice interaction between cut and perhaps split to generate the desired output.
Thanks in advance.
Update
After testing, I find that even my current implementation isn't quite what I'd like. If I say:
n = 3
test_cut = test %>%
  mutate(bin_durations = cut(test$cumulative_time, breaks = seq(0, 30, n),
                             labels = as.character(seq(n, 30, n)), include.lowest = TRUE)) %>%
  group_by(Id, bin_durations) %>%
  mutate(total_duration = sum(duration)) %>%
  select(Id, bin_durations, total_duration) %>%
  distinct()
I get
test_cut
# A tibble: 11 x 3
# Groups: Id, bin_durations [11]
Id bin_durations total_duration
<dbl> <fct> <dbl>
1 1 3 188
2 1 6 124
3 1 9 706
4 1 12 53
5 1 24 669
6 1 30 1506.
7 2 3 335
8 2 6 349
9 2 15 395
10 2 21 385
11 2 24 175
Where there are no occurrences in the bin sequence, I should get 0 in the duration column rather than an omission. Thus, it should look like:
test_cut
# A tibble: 20 x 3
# Groups: Id, bin_durations [20]
Id bin_durations total_duration
<dbl> <fct> <dbl>
1 1 3 188
2 1 6 124
3 1 9 706
4 1 12 53
5 1 15 0
6 1 18 0
7 1 21 0
8 1 24 669
9 1 27 0
10 1 30 1506.
11 2 3 335
12 2 6 349
13 2 9 0
14 2 12 0
15 2 15 395
16 2 18 0
17 2 21 385
18 2 24 175
19 2 27 0
20 2 30 0
Here is one idea via integer division (%/%)
library(tidyverse)
test %>%
  group_by(Id, grp = cumulative_time %/% 10) %>%
  summarise(total_duration = sum(duration))
which gives,
# A tibble: 6 x 3
# Groups: Id [?]
     Id   grp total_duration
  <dbl> <dbl>          <dbl>
1     1     0           1018
2     1     1             53
3     1     2          2175.
4     2     0            684
5     2     1            780
6     2     2            175
To address your updated issue, we can use complete in order to add the missing rows. So, for the same example, binning in 3-hour intervals:
test %>%
  group_by(Id, grp = cumulative_time %/% 3) %>%
  summarise(total_duration = sum(duration)) %>%
  ungroup() %>%
  complete(Id, grp = seq(min(grp), max(grp)), fill = list(total_duration = 0))
which gives,
# A tibble: 20 x 3
Id grp total_duration
<dbl> <dbl> <dbl>
1 1 0 188
2 1 1 124
3 1 2 706
4 1 3 53
5 1 4 0
6 1 5 0
7 1 6 0
8 1 7 669
9 1 8 0
10 1 9 1506.
11 2 0 335
12 2 1 349
13 2 2 0
14 2 3 0
15 2 4 395
16 2 5 0
17 2 6 385
18 2 7 175
19 2 8 0
20 2 9 0
We could make these changes:
- test$cumulative_time can be simply cumulative_time
- breaks can be factored out and then used in the cut as shown
- the second mutate can be changed to summarize, in which case the select and distinct are not needed
- it is always a good idea to close any group_by with a matching ungroup, or in the case of summarize we can use .groups = "drop"
- add complete to insert 0 for levels not present
Implementing these changes we have:
library(dplyr)
library(tidyr)
breaks <- seq(0, 40, 10)
test %>%
  mutate(bin_durations = cut(cumulative_time, breaks = breaks,
                             labels = breaks[-1], include.lowest = TRUE)) %>%
  group_by(Id, bin_durations) %>%
  summarize(total_duration = sum(duration), .groups = "drop") %>%
  complete(Id, bin_durations, fill = list(total_duration = 0))
giving:
# A tibble: 8 x 3
Id bin_durations total_duration
<dbl> <fct> <dbl>
1 1 10 1018
2 1 20 53
3 1 30 2175.
4 1 40 0
5 2 10 684
6 2 20 780
7 2 30 175
8 2 40 0
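Since computational speed was a concern, a data.table version of the %/% idea above may be worth considering on very large frames; this is a sketch (bin boundaries are left-closed here, slightly different from cut's right-closed intervals, and empty bins would still need a complete-style fill):
library(data.table)
DT <- as.data.table(test)
# label each bin by its upper edge: [0, 10) -> 10, [10, 20) -> 20, ...
DT[, .(total_duration = sum(duration)),
   by = .(Id, bin = (cumulative_time %/% 10 + 1) * 10)]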

How to replace not-available values with the nearest previous value with respect to a corresponding column

Hi, I have a data frame. How can I replace NA values in Val_1 using the Val_1 of the previous row whose Val_2 is nearest to the current Val_2?
For example, Val_1 at ID 4 is missing and the corresponding Val_2 is 33.3; the nearest previous Val_2 is 33 (at ID 3), so we replace with that row's Val_1, i.e. 45. Similarly, ID 8 gets 33, because the previous Val_2 nearest to 44.6 is 44.5 (at ID 6), whose Val_1 is 33.
ID Date Val_1 Val_2
1 01-02-2014 NA 22
2 02-02-2014 23 NA
3 03-02-2014 45 33
4 04-02-2014 NA 33.3
5 05-02-2014 45 46
6 06-02-2014 33 44.5
7 07-02-2014 56 48
8 08-02-2014 NA 44.6
9 09-02-2014 10 43
10 10-02-2014 14 56
11 11-02-2014 NA NA
12 12-02-2014 22 22
We can replace the NA values with:
library(zoo)
na.locf(na.locf(DF$Val_1), fromLast = TRUE)
but the above code replaces NAs with the previous value from the same column.
Expected output:
ID Date Val_1 Val_2
1 01-02-2014 NA 22
2 02-02-2014 23 NA
3 03-02-2014 45 33
4 04-02-2014 45 33.3
5 05-02-2014 45 46
6 06-02-2014 33 44.5
7 07-02-2014 56 48
8 08-02-2014 33 44.6
9 09-02-2014 10 43
10 10-02-2014 14 56
11 11-02-2014 NA NA
12 12-02-2014 22 22
Thanks
Sorry, but I couldn't think of a simpler way:
# To use pipes
library(dplyr)
# Give a threshold. Nearest values must have a difference below this threshold
diff.threshold <- 0.5
# Create a vector with IDs that must have Val_1 updated
IDtoReplace <- DF %>%
  filter(is.na(Val_1), !is.na(Val_2)) %>%
  select(ID) %>%
  unlist()
for (id in IDtoReplace) {
  # Get Val_2 from the current id
  curVal2 <- DF %>% filter(ID == id) %>% select(Val_2) %>% unlist()
  # Get the value to be input
  valuetoinput <- DF %>%
    filter(!is.na(Val_1), !is.na(Val_2), ID < id) %>%     # keep only complete previous rows
    mutate(diff = abs(Val_2 - curVal2)) %>%               # calculate all the differences
    filter(diff == min(diff), diff <= diff.threshold) %>% # keep the row with the minimum difference (must be below the threshold)
    select(Val_1) %>%                                     # select Val_1
    unlist()
  # If any value is found, replace it in the data frame
  if (length(valuetoinput) > 0)
    DF[which(DF$ID == id), "Val_1"] <- valuetoinput
}
And as a result:
> DF
ID Date Val_1 Val_2
1 1 01-02-2014 NA 22.0
2 2 02-02-2014 23 NA
3 3 03-02-2014 45 33.0
4 4 04-02-2014 45 33.3
5 5 05-02-2014 45 46.0
6 6 06-02-2014 33 44.5
7 7 07-02-2014 56 48.0
8 8 08-02-2014 33 44.6
9 9 09-02-2014 10 43.0
10 10 10-02-2014 14 56.0
11 11 11-02-2014 NA NA
12 12 12-02-2014 22 22.0
Will you use something similar very often? If so, I suggest rewriting the for loop as a function.
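A sketch of that refactor, wrapping the same logic in a reusable function (the function name and the threshold default are illustrative):
fill_val1_by_nearest_val2 <- function(DF, threshold = 0.5) {
  ids <- DF$ID[is.na(DF$Val_1) & !is.na(DF$Val_2)]
  for (id in ids) {
    cur <- DF$Val_2[DF$ID == id]
    # complete earlier rows are the candidates
    cand <- DF[!is.na(DF$Val_1) & !is.na(DF$Val_2) & DF$ID < id, ]
    if (nrow(cand) == 0) next
    d <- abs(cand$Val_2 - cur)
    if (min(d) <= threshold)
      DF$Val_1[DF$ID == id] <- cand$Val_1[which.min(d)]
  }
  DF
}
DF <- fill_val1_by_nearest_val2(DF)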
