I have a data frame that is already sorted as needed, but now I would like to "slice" it into groups.
These groups should have a maximum cumulative value of 10: whenever the running sum exceeds 10, the cumulative sum should reset and start over again.
library(dplyr)
id <- sample(1:15)  # unseeded, so id will vary between runs
order <- 1:15
value <- c(4, 5, 7, 3, 8, 1, 2, 5, 3, 6, 2, 6, 3, 1, 4)
df <- data.frame(id, order, value)
df
This is the output I'm looking for (I did it "manually"):
cumsum_10 <- c(4, 9, 7, 10, 8, 9, 2, 7, 10, 6, 8, 6, 9, 10, 4)
group_10 <- c(1, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 6, 6, 6, 7)
df1 <- data.frame(df, cumsum_10, group_10)
df1
So I have two problems:
1) How to create a cumulative variable that resets every time it passes an upper limit (10 in this case).
2) How to number each of these groups.
For the first part I was trying some combinations of group_by() and cumsum() with no luck:
df1 <- df %>% group_by(cumsum(c(FALSE, value < 10)))
I would prefer a pipe (%>%) solution instead of a for loop
Thanks
I think this is not easily vectorizable... at least I do not know how.
You can do it by hand via:
my_cumsum <- function(x){
  grp <- integer(length(x))
  grp[1] <- 1
  for(i in 2:length(x)){
    if(x[i-1] + x[i] <= 10){
      # still fits: stay in the current group and accumulate
      grp[i] <- grp[i-1]
      x[i] <- x[i-1] + x[i]
    } else {
      # would exceed 10: start a new group at the raw value
      grp[i] <- grp[i-1] + 1
    }
  }
  data.frame(grp, x)
}
For your data this gives:
> my_cumsum(df$value)
grp x
1 1 4
2 1 9
3 2 7
4 2 10
5 3 8
6 3 9
7 4 2
8 4 7
9 4 10
10 5 6
11 5 8
12 6 6
13 6 9
14 6 10
15 7 4
Also for my "counter-example" this gives:
> my_cumsum(c(10,6,4))
grp x
1 1 10
2 2 6
3 2 10
As @Khashaa pointed out, this can be implemented more efficiently via Rcpp. He linked to the answer "How to speed up or vectorize a for loop?", which I find very useful.
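For reference, a minimal Rcpp sketch of the same reset logic (grp_cumsum is a made-up name, and this assumes the Rcpp package is installed):

library(Rcpp)

cppFunction('
IntegerVector grp_cumsum(NumericVector x, double threshold) {
  int n = x.size();
  IntegerVector grp(n);
  double s = 0.0;
  int g = 1;
  for (int i = 0; i < n; ++i) {
    s += x[i];
    if (s > threshold) {  // passed the limit: restart the sum, open a new group
      s = x[i];
      g += 1;
    }
    grp[i] = g;
  }
  return grp;
}')

grp_cumsum(df$value, 10)
# [1] 1 1 2 2 3 3 4 4 4 5 5 6 6 6 7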
You could define your own function and then use it inside dplyr's mutate statement as follows:
df %>%
  group_by() %>%
  mutate(
    cumsum_10 = cumsum_with_reset(value, 10),
    group_10 = cumsum_with_reset_group(value, 10)
  ) %>%
  ungroup()
The cumsum_with_reset() function takes a column and a threshold value which resets the sum. cumsum_with_reset_group() is similar but identifies rows that have been grouped together. Definitions are as follows:
# group rows based on cumsum with reset
cumsum_with_reset_group <- function(x, threshold) {
  cumsum <- 0
  group <- 1
  result <- numeric(length(x))  # preallocate instead of growing in the loop
  for (i in seq_along(x)) {
    cumsum <- cumsum + x[i]
    if (cumsum > threshold) {
      # passed the threshold: start a new group at the current value
      group <- group + 1
      cumsum <- x[i]
    }
    result[i] <- group
  }
  result
}

# cumsum with reset
cumsum_with_reset <- function(x, threshold) {
  cumsum <- 0
  result <- numeric(length(x))
  for (i in seq_along(x)) {
    cumsum <- cumsum + x[i]
    if (cumsum > threshold) {
      # passed the threshold: restart the running sum at the current value
      cumsum <- x[i]
    }
    result[i] <- cumsum
  }
  result
}
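A quick check against the hand-made columns from the question:

cumsum_with_reset(df$value, 10)
# [1]  4  9  7 10  8  9  2  7 10  6  8  6  9 10  4
cumsum_with_reset_group(df$value, 10)
# [1] 1 1 2 2 3 3 4 4 4 5 5 6 6 6 7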
This can be done easily with purrr::accumulate(): the running total resets to the current value whenever adding the next value would push it past 10, and (since all values are positive) a row starts a new group exactly when the running total equals the raw value, which is what cumsum(value == cumsum_10) counts.
library(dplyr)
library(purrr)
df %>% mutate(cumsum_10 = accumulate(value, ~ifelse(.x + .y <= 10, .x + .y, .y)),
group_10 = cumsum(value == cumsum_10))
id order value cumsum_10 group_10
1 8 1 4 4 1
2 13 2 5 9 1
3 7 3 7 7 2
4 1 4 3 10 2
5 4 5 8 8 3
6 10 6 1 9 3
7 12 7 2 2 4
8 2 8 5 7 4
9 15 9 3 10 4
10 11 10 6 6 5
11 14 11 2 8 5
12 3 12 6 6 6
13 5 13 3 9 6
14 9 14 1 10 6
15 6 15 4 4 7
We can take advantage of the function cumsumbinning() from the MESS package, which performs exactly this task:
library(MESS)
df %>%
group_by(group_10 = cumsumbinning(value, 10)) %>%
mutate(cumsum_10 = cumsum(value))
Output
# A tibble: 15 x 5
# Groups: group_10 [7]
id order value group_10 cumsum_10
<int> <int> <dbl> <int> <dbl>
1 6 1 4 1 4
2 10 2 5 1 9
3 1 3 7 2 7
4 5 4 3 2 10
5 3 5 8 3 8
6 9 6 1 3 9
7 14 7 2 4 2
8 11 8 5 4 7
9 15 9 3 4 10
10 8 10 6 5 6
11 12 11 2 5 8
12 2 12 6 6 6
13 4 13 3 6 9
14 7 14 1 6 10
15 13 15 4 7 4
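Note that cumsumbinning() on its own already returns the group index vector; the mutate() above only adds the within-group running sum:

MESS::cumsumbinning(df$value, 10)
# [1] 1 1 2 2 3 3 4 4 4 5 5 6 6 6 7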
The function below uses recursion to construct a vector with the lengths of each group. It is faster than a loop for small data vectors (length less than about a hundred values), but slower for longer ones. It takes three arguments:
1) vec: A vector of values that we want to group.
2) i: The index of the starting position in vec.
3) glv: A vector of group lengths. This is the return value, but we need to initialize it and pass it along through each recursion.
# Group a vector based on consecutive values with a cumulative sum <= 10
gf = function(vec, i, glv) {
  ## Break out of the recursion when we get to the last group
  if (sum(vec[i:length(vec)]) <= 10) {
    glv = c(glv, length(i:length(vec)))
    return(glv)
  }
  ## Keep recursion going if there are at least two groups left
  # Calculate length of current group
  gl = sum(cumsum(vec[i:length(vec)]) <= 10)
  # Append to previous group lengths
  glv.append = c(glv, gl)
  # Call function recursively
  gf(vec, i + gl, glv.append)
}
Run the function to return a vector of group lengths:
group_vec = gf(df$value, 1, numeric(0))
[1] 2 2 2 3 2 3 1
To add a column to df with the group lengths, use rep:
df$group10 = rep(1:length(group_vec), group_vec)
In its current form the function will only work on vectors that don't contain any value greater than 10, and the grouping by sums <= 10 is hard-coded. The function can of course be generalized to remove these limitations.
The function can be sped up somewhat by doing cumulative sums that look ahead only a certain number of values, rather than over the remaining length of the vector. For example, if the values are positive integers, you only need to look ten values ahead, since you'll never need to sum more than ten numbers to reach a total of 10. This too can be generalized for any target value. Even with this modification, the function is still slower than a loop for a vector with more than about a hundred values.
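For what it's worth, a hedged sketch of that look-ahead variant (gf2 is a made-up name; it assumes positive values, none greater than the target of 10):

gf2 = function(vec, i, glv, lookahead = 10) {
  # Sum over a small window instead of the whole remaining vector
  j = min(i + lookahead - 1, length(vec))
  gl = sum(cumsum(vec[i:j]) <= 10)
  # If the current group reaches the end of the vector, we are done
  if (i + gl > length(vec)) return(c(glv, gl))
  gf2(vec, i + gl, c(glv, gl), lookahead)
}

gf2(df$value, 1, numeric(0))
# [1] 2 2 2 3 2 3 1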
I haven't worked with recursive functions in R before and would be interested in any comments and suggestions on whether recursion makes sense for this type of problem and whether it can be improved, especially execution speed.
Related
I have a data frame that stores adjacency relations. I want to divide numbers into different groups according to this data frame. The data frame is as follows:
df = data.frame(from=c(1,1,2,2,2,3,3,3,4,4,4,5,5), to=c(1,3,2,3,4,1,2,3,2,4,5,4,5))
df
from to
1 1 1
2 1 3
3 2 2
4 2 3
5 2 4
6 3 1
7 3 2
8 3 3
9 4 2
10 4 4
11 4 5
12 5 4
13 5 5
In the above data frame, number 1 has links with numbers 1 and 3, and number 2 has links with numbers 2, 3, and 4, so number 1 cannot be in the same group as number 3, and number 2 cannot be in the same group as numbers 3 and 4. In the end, the groups can be c(1, 2, 5) and c(3, 4).
How can I program this?
First replace the values of to with NA when from and to are equal.
df2 <- transform(df, to = replace(to, from == to, NA))
Then bind the rows one by one with Reduce(), keeping a row only if its from value has not already appeared in the to column of the rows kept so far.
Reduce(function(x, y) {
  if(y$from %in% x$to) x else rbind(x, y)
}, split(df2, 1:nrow(df2)))
# from to
# 1 1 NA
# 2 1 3
# 3 2 NA
# 4 2 3
# 5 2 4
# 12 5 4
# 13 5 NA
Finally, you can extract the unique elements of both columns to get the two groups.
The overall pipeline should be
df |>
transform(to = replace(to, from == to, NA)) |>
(\(dat) split(dat, 1:nrow(dat)))() |>
Reduce(f = \(x, y) if(y$from %in% x$to) x else rbind(x, y))
Darren Tsai's answer solves this problem, but with some flaws.
The following is a very clumsy solution:
df = data.frame(from=c(1,1,2,2,2,3,3,3,4,4,4,5,5), to=c(1,3,2,3,4,1,2,3,2,4,5,4,5))
df.list = lapply(split(df, df$from), function(x){
  x$to
})
group.idx = rep(1, length(unique(df$from)))
for (i in seq_along(df.list)) {
  df.vec <- df.list[[i]]
  curr.group = group.idx[i]
  remain.vec = setdiff(df.vec, i)
  for (j in remain.vec) {
    if(group.idx[j] == curr.group){
      group.idx[j] = curr.group + 1
    }
  }
}
group.idx
[1] 1 1 2 2 1
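For comparison, the grouping can also be viewed as a graph colouring problem. A hedged sketch (assuming the igraph package; greedy_vertex_coloring() needs igraph >= 1.3):

library(igraph)

edges <- subset(df, from != to)              # drop self-loops
g <- graph_from_data_frame(edges, directed = FALSE)
colors <- greedy_vertex_coloring(g)          # adjacent numbers get different colours
split(as.integer(V(g)$name), colors)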
I want to write a function in R that receives a data set containing some missing points (NA) as input and replaces them using means. What I have in mind is a function sketch like this:
x <- function(data, type = "mean", lag = 2)
It should compute the mean of the two numbers before and the two numbers after each missing point (because I set lag to 2 in the function). For example, if the missing point is in the 12th place, the function should compute the mean of the numbers in places 10, 11, 13, and 14 and substitute the result for the missing point in place 12. In special cases, for example when the missing point is in the last place and we do not have two numbers after it, the function should instead compute the mean of all the data in the corresponding column. Here is an example to make this clear. Consider the following data set:
3 7 8 0 8 12 2
5 8 9 2 8 9 1
1 2 4 5 0 6 7
5 6 0 NA 3 9 10
7 2 3 6 11 14 2
4 8 7 4 5 3 NA
In the above data set, the first NA should be replaced with the mean of 2 and 5 (the two values before) and 6 and 4 (the two values after), which is (2+5+6+4)/4 = 17/4. The last NA should be replaced with the mean of the last column, which is (2+1+7+10+2)/5 = 22/5.
My question is how to add the necessary code (if, if-else, or other control flow) to the above function to make it complete and satisfy the above description. I should highlight that I want to use the apply family of functions.
First we can define a function that smooths a single vector:
library(dplyr)
smooth = function(vec, n=2){
  # Lead and lag the vector twice in both directions
  purrr::map(1:n, function(i){
    cbind(
      lead(vec, i),
      lag(vec, i)
    )
  }) %>%
    # Bind the matrix together
    do.call(cbind, .) %>%
    # Take the mean of each row, ie the smoothed version at each position
    # If there are NAs in the mean, it will itself be NA
    rowMeans() %>%
    # In order, take a) original values b) locally smoothed values
    # c) globally smoothed values (ie the entire mean ignoring NAs)
    coalesce(vec, ., mean(vec, na.rm=TRUE))
}
> smooth(c(0, 2, 5, NA, 6, 4))
[1] 0.00 2.00 5.00 4.25 6.00 4.00
> smooth(c(2, 1, 7, 10, 2, NA))
[1] 2.0 1.0 7.0 10.0 2.0 4.4
Then we can apply it to each column:
> c(3, 7, 8, 0, 8, 12, 2, 5, 8, 9, 2, 8, 9, 1, 1, 2, 4, 5, 0, 6, 7, 5, 6, 0, NA, 3, 9, 10, 7, 2, 3, 6, 11, 14, 2, 4, 8, 7, 4, 5, 3, NA) %>%
matrix(byrow=TRUE, ncol=7) %>%
as_tibble(.name_repair="universal") %>%
mutate(across(everything(), smooth))
# A tibble: 6 × 7
...1 ...2 ...3 ...4 ...5 ...6 ...7
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 3 7 8 0 8 12 2
2 5 8 9 2 8 9 1
3 1 2 4 5 0 6 7
4 5 6 0 4.25 3 9 10
5 7 2 3 6 11 14 2
6 4 8 7 4 5 3 4.4
Please find below one solution using the data.table library.
Reprex
Your data:
m1 <- "3 7 8 0 8 12 2
5 8 9 2 8 9 1
1 2 4 5 0 6 7
5 6 0 NA 3 9 10
7 2 3 6 11 14 2
4 8 7 4 5 3 NA"
myData <- read.table(text = m1, header = FALSE)
Code for the function replaceNA
library(data.table)

replaceNA <- function(data){
  setDT(data)
  # Create a data.table identifying row and column indexes of NA values in the data.table
  NA_DT <- as.data.table(which(is.na(data), arr.ind = TRUE))
  # Select row and column indexes of NAs that are not in the last row of the data.table
  NA_not_Last <- NA_DT[row < nrow(data)]
  # Select row and column indexes of NAs that are in the last row of the data.table
  NA_Last <- NA_DT[row == nrow(data)]
  # Column names where NA values are not in the last row
  Cols_NA_not_Last <- colnames(data)[NA_not_Last[, col]]
  # Column names where NA values are in the last row
  Cols_NA_Last <- colnames(data)[NA_Last[, col]]
  # Replace NA values that are not in the last row by the mean of the values located
  # in the two previous rows and the two following rows of the row containing the NA
  data[, (Cols_NA_not_Last) := lapply(.SD, function(x)
    replace(x, which(is.na(x)),
            mean(c(x[which(is.na(x)) - 2], x[which(is.na(x)) - 1],
                   x[which(is.na(x)) + 1], x[which(is.na(x)) + 2]),
                 na.rm = TRUE))), .SDcols = Cols_NA_not_Last][]
  # Replace NA values in the last row by the mean of all the values in that column
  data[, (Cols_NA_Last) := lapply(.SD, function(x)
    replace(x, which(is.na(x)), mean(x, na.rm = TRUE))), .SDcols = Cols_NA_Last][]
  return(data)
}
Test of the function with your data
replaceNA(myData)
#> V1 V2 V3 V4 V5 V6 V7
#> 1: 3 7 8 0.00 8 12 2.0
#> 2: 5 8 9 2.00 8 9 1.0
#> 3: 1 2 4 5.00 0 6 7.0
#> 4: 5 6 0 4.25 3 9 10.0
#> 5: 7 2 3 6.00 11 14 2.0
#> 6: 4 8 7 4.00 5 3 4.4
Created on 2021-11-08 by the reprex package (v2.0.1)
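Since the question explicitly asks for the apply family, here is a hedged base R sketch along the same lines (impute_col is a made-up helper; it assumes lag = 2 and falls back to the column mean whenever there are not enough trailing values):

impute_col <- function(x, lag = 2) {
  for (i in which(is.na(x))) {
    if (i + lag > length(x)) {
      # not enough values after the NA: use the whole-column mean
      x[i] <- mean(x, na.rm = TRUE)
    } else {
      # mean of the `lag` values before and the `lag` values after the NA
      idx <- c(i - seq_len(lag), i + seq_len(lag))
      x[i] <- mean(x[idx[idx >= 1]], na.rm = TRUE)
    }
  }
  x
}

myData[] <- lapply(myData, impute_col)   # same rule, column by column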
I am trying to find a faster way to accomplish the following, since my actual dataset is very large; I would like to get rid of the for loop altogether. I want to duplicate the rows of xdf once for each column j of values and, next to each entry in the new dataset, show the row sums of columns 1 through j of values.
xdf <- data_frame(
  x = c('a', 'b', 'c'),
  y = c(4, 5, 6)
)
values <- data_frame(
  col_1 = c(5, 9, 1),
  col_2 = c(4, 7, 6),
  col_3 = c(1, 5, 2),
  col_4 = c(7, 8, 5)
)
for (j in seq(ncol(values))){
  if (j == 1){
    Temp <- cbind(xdf, z = rowSums(values[1:j]))
  } else {
    Temp <- rbind(Temp, cbind(xdf, z = rowSums(values[1:j])))
  }
}
print(Temp)
The output should be:
x y z
1 a 4 5
2 b 5 9
3 c 6 1
4 a 4 9
5 b 5 16
6 c 6 7
7 a 4 10
8 b 5 21
9 c 6 9
10 a 4 17
11 b 5 29
12 c 6 14
Is there a shorter way to accomplish this?
This is the closest answer that I could get on SO.
How to expand data frame based on values?
I am new to R, so sorry for the long-winded code.
Here's one base R option:
Repeat the rows in xdf as there are number of columns in values, iteratively increment one column at a time to find rowSums and add it as a new column in the final dataframe.
newdf <- xdf[rep(seq(nrow(xdf)), ncol(values)), ]
newdf$z <- c(sapply(seq(ncol(values)), function(x) rowSums(values[1:x])))
newdf
# A tibble: 12 x 3
# x y z
# <chr> <dbl> <dbl>
# 1 a 4 5
# 2 b 5 9
# 3 c 6 1
# 4 a 4 9
# 5 b 5 16
# 6 c 6 7
# 7 a 4 10
# 8 b 5 21
# 9 c 6 9
#10 a 4 17
#11 b 5 29
#12 c 6 14
A concise one-liner, as suggested by @sindri_baldur, doesn't require repeating the rows explicitly:
cbind(xdf, z = c(sapply(seq(ncol(values)), function(x) rowSums(values[1:x]))))
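If you are already in the tidyverse, roughly the same idea reads as follows (a sketch assuming dplyr and purrr are loaded):

library(dplyr)
library(purrr)

# one copy of xdf per column of values, each carrying the running row sums
map_dfr(seq_len(ncol(values)), ~ mutate(xdf, z = rowSums(values[1:.x])))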
Sample data containing some arithmetic sequences c(4,5,6) and c(10,11).
df <- data.frame(x = c(2, 4, 5, 6, 8, 10, 11))
What I want is a new column that counts the running length of each sequence, such as:
> df
x cnt
1 2 1
2 4 1
3 5 2
4 6 3
5 8 1
6 10 1
7 11 2
It would be simple to first assign df$cnt[1] = 1 and then, for the second row onward, either increment the count or reset it to 1 depending on whether the consecutive numbers in df$x meet the criterion (here x[i] - x[i-1] == 1). I am just not sure a loop is the way to go in R, and I also need to deal with groups.
I can create a new column that checks whether each value continues a sequence. From there, I can probably use rle to compute the run lengths and generate the cnt column (though I'm not sure how to handle the NA):
> df %>% mutate(check=(x-lag(x)==1))
x check
1 2 NA
2 4 FALSE
3 5 TRUE
4 6 TRUE
5 8 FALSE
6 10 FALSE
7 11 TRUE
Is this the way to go? Could you suggest solutions with dplyr or data.table?
dplyr. Set the default value and it will work:
df %>%
  mutate(check = x - lag(x, default = x[1L]) != 1) %>%
  group_by(g = cumsum(check)) %>%
  mutate(cnt = row_number()) %>%
  ungroup %>%
  select(-g, -check)
x cnt
<dbl> <int>
1 2 1
2 4 1
3 5 2
4 6 3
5 8 1
6 10 1
7 11 2
data.table. Along the same lines and more concisely:
library(data.table)
setDT(df)
df[, cnt := 1:.N, by=cumsum(x != shift(x, fill=x[1L]) + 1L)]
x cnt
1: 2 1
2: 4 1
3: 5 2
4: 6 3
5: 8 1
6: 10 1
7: 11 2
shift is data.table's analogue to lag.
Alternately, from v1.9.7 of the package on, you're able to use rowid instead:
df[, cnt := rowid(cumsum(x != shift(x, fill=x[1L]) + 1L))]
Another option using base R
unlist(sapply(rle(cumsum(ifelse(diff(c(df$x[1],df$x))!=1,1,0)))$lengths,seq_len))
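The same idea reads a little more directly with base sequence(), which expands each run length into 1:length:

df$cnt <- sequence(rle(cumsum(diff(c(df$x[1], df$x)) != 1))$lengths)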
I am running into an issue with my data where I want to take the first observed score for each individual id and subtract the last observed score from it.
The problem is that sometimes the first observation is missing.
Is there any way to ask for the first observed score for each individual, skipping any missing data?
I built the below df to illustrate my problem.
help <- data.frame(id = c(5,5,5,5,5,12,12,12,17,17,20,20,20),
ob = c(1,2,3,4,5,1,2,3,1,2,1,2,3),
score = c(NA, 2, 3, 4, 3, 7, 3, 4, 3, 4, NA, 1, 4))
id ob score
1 5 1 NA
2 5 2 2
3 5 3 3
4 5 4 4
5 5 5 3
6 12 1 7
7 12 2 3
8 12 3 4
9 17 1 3
10 17 2 4
11 20 1 NA
12 20 2 1
13 20 3 4
And what I am hoping to run is code that will give me...
id ob score es
1 5 1 NA -1
2 5 2 2 -1
3 5 3 3 -1
4 5 4 4 -1
5 5 5 3 -1
6 12 1 7 3
7 12 2 3 3
8 12 3 4 3
9 17 1 3 -1
10 17 2 4 -1
11 20 1 NA -3
12 20 2 1 -3
13 20 3 4 -3
I am attempting to work in dplyr, and I understand the use of the group_by command; however, I am not sure how to select only the first observed score and then mutate to create es.
I would use first() and last() (both dplyr functions) and na.omit() (from the default stats package).
First, I would make sure your score column is a numeric column with proper NA values:
help <- data.frame(id = c(5,5,5,5,5,12,12,12,17,17,20,20,20),
ob = c(1,2,3,4,5,1,2,3,1,2,1,2,3),
score = c(NA, 2, 3, 4, 3, 7, 3, 4, 3, 4, NA, 1, 4))
then you can do
library(dplyr)
help %>%
  group_by(id) %>%
  arrange(ob) %>%
  mutate(es = first(na.omit(score)) - last(na.omit(score)))
library(dplyr)
temp <- help %>%
  group_by(id) %>%
  arrange(ob) %>%
  filter(!is.na(score)) %>%
  mutate(es = first(score) - last(score)) %>%
  select(id, es) %>%
  distinct()

help %>% left_join(temp)
This solution is a little verbose, only because it relies on a couple of helper functions, FIRST and LAST:

# The position (index) of the last value that evaluates to TRUE.
LAST <- function (x, none = NA) {
  out <- FIRST(rev(x), none = none)
  if (identical(none, out)) {
    return(none)
  } else {
    return(length(x) - out + 1)
  }
}

# The position (index) of the first value that evaluates to TRUE.
FIRST <- function (x, none = NA) {
  x[is.na(x)] <- FALSE
  if (any(x))
    return(which.max(x))
  else
    return(none)
}

# the difference between the first and last non-missing values
# (first minus last, to match the desired es column)
diff2 <- function(x)
  x[FIRST(!is.na(x))] - x[LAST(!is.na(x))]
library(dplyr)
help %>%
  group_by(id) %>%
  arrange(ob) %>%
  summarise(diff = diff2(score))
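To attach the value to every row, as in the desired output, swap summarise() for mutate():

help %>%
  group_by(id) %>%
  arrange(ob, .by_group = TRUE) %>%
  mutate(es = diff2(score)) %>%
  ungroup()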