Finding all possible sum combinations of a given column in R

R data frame 1:

  Index  Powervalue
  0      1
  1      2
  2      4
  3      8
  4      16
  5      32

R data frame 2:

  CombinedValue
  20
  50

Expected final result:

  CombinedValue  possiblecodes
  20             4, 16
  50             2, 16, 32
Can we get the output shown above? If yes, please help.

Here you go.
df <- data.frame(sum = c(50, 20, 6))

values_list <- list()
for (i in 1:nrow(df)) {
  sum <- df$sum[i]
  values <- c()
  while (sum > 0) {
    value <- 2^floor(log2(sum))
    values <- c(values, value)
    sum <- sum - value
  }
  values_list[[i]] <- values
}
df$values <- values_list

df is now:

  sum     values
1  50 32, 16, 2
2  20     16, 4
3   6      4, 2
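If the values are integers, an alternative sketch (not part of the original answer) tests each power of two directly with bitwAnd(), avoiding the explicit while loop. It assumes non-negative values below 2^31:

# decompose each value into its powers of two via bitwise AND
powers <- 2^(0:30)
df$values <- lapply(df$sum, function(v) powers[bitwAnd(v, powers) > 0])

This returns the powers in ascending order; wrap each result in rev() if the descending order of the loop above is preferred.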

Related

Finding all power-of-2 values that sum to a given number in R

R data frame 1:

  Index  Powervalue
  0      1
  1      2
  2      4
  3      8
  4      16
  5      32

R data frame 2:

  CombinedValue
  20
  50

Expected final result:
Can we get the output as in the attached image? If yes, please help.
One Stack Overflow user provided the code below. I am looking for how to separate the comma-separated values into columns of 1s and 0s.
df <- data.frame(sum = c(50, 20, 6))

values_list <- list()
for (i in 1:nrow(df)) {
  sum <- df$sum[i]
  values <- c()
  while (sum > 0) {
    value <- 2^floor(log2(sum))
    values <- c(values, value)
    sum <- sum - value
  }
  values_list[[i]] <- values
}
df$values <- values_list
Can we fix the columns up to power 31, as shown in the attached image? Where a column matches possiblecodes, place 1; otherwise place 0 for the remaining columns. Please help.
Here is a function whose output matches the expected output.
toCodes <- function(x) {
  n <- floor(log2(x))
  pow <- rev(seq.int(max(n)))
  # 'y' is the matrix of codes
  y <- t(sapply(x, \(.x) (.x %/% 2^pow) %% 2L))
  i_cols <- apply(y, 2, \(.y) any(.y != 0L))
  colnames(y) <- sprintf("code_%d", 2^pow)
  #
  possiblecodes <- apply(y, 1, \(p) {
    codes <- 2^pow[as.logical(p)]
    paste(rev(codes), collapse = ",")
  })
  data.frame(combinedvalue = x, possiblecodes, y[, i_cols])
}

x <- c(20L, 50L)
toCodes(x)
#>   combinedvalue possiblecodes code_32 code_16 code_4 code_2
#> 1            20          4,16       0       1      1      0
#> 2            50       2,16,32       1       1      0      1
Created on 2022-12-19 with reprex v2.0.2
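If a fixed set of indicator columns for every power up to 2^31 is really wanted (toCodes() keeps only the powers that actually occur), a sketch not from the original answer builds the full matrix with outer(); %/% and %% work on doubles, so 2^31 is handled:

pow <- 0:31
codes <- outer(x, 2^pow, function(a, b) (a %/% b) %% 2)
colnames(codes) <- paste0("code_", 2^pow)
data.frame(combinedvalue = x, codes, check.names = FALSE)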

Cumulative sum in R by group and start over when sum of values in group larger than maximum value

The function below groups values in a vector based on whether the cumulative sum has reached a certain max value and then starts over.
cs_group <- function(x, threshold) {
  cumsum <- 0
  group <- 1
  result <- numeric()
  for (i in 1:length(x)) {
    cumsum <- cumsum + x[i]
    if (cumsum > threshold) {
      group <- group + 1
      cumsum <- x[i]
    }
    result <- c(result, group)
  }
  return(result)
}
Example
The max value in the example is 10. The first group includes only 9, because summing it with the next value would give a sum of 12. The next group includes 3, 2, 2 (adding 8 would give a value higher than 10).
test <- c(9, 3, 2, 2, 8, 5, 4, 9, 1)
cs_group(test, 10)
[1] 1 2 2 2 3 4 4 5 5
However, I would prefer each group to also include the value that pushes the cumulative sum above the maximum value of 10.
Ideal result:
[1] 1 1 2 2 2 3 3 3 4
You can write your own custom function or use code written by others.
I had the exact same problem a few days back, and this has been included in the MESS package.
devtools::install_github("ekstroem/MESS")
MESS::cumsumbinning(test, 10, cutwhenpassed = TRUE)
#[1] 1 1 2 2 2 3 3 3 4
One purrr approach could be:
library(purrr)
cumsum(c(FALSE, diff(accumulate(test, ~ ifelse(.x >= 10, .y, .x + .y))) <= 0))
[1] 0 0 1 1 1 2 2 2 3
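The group labels here start at 0; adding 1 reproduces the labels of the ideal result:

cumsum(c(FALSE, diff(accumulate(test, ~ ifelse(.x >= 10, .y, .x + .y))) <= 0)) + 1
[1] 1 1 2 2 2 3 3 3 4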
For your purpose, your cs_group can be written like below (if I understand the logic correctly):
cs_group <- function(x, threshold) {
  group <- 1
  r <- c()
  repeat {
    if (length(x) == 0) break
    # take the longest prefix whose cumulative sum stays within the threshold,
    # plus the one value that pushes it over (unless we are already at the end)
    cnt <- (idx <- max(which(cumsum(x) <= threshold))) + ifelse(idx == length(x), 0, 1)
    r <- c(r, rep(group, cnt))
    x <- x[-(1:cnt)]
    group <- group + 1
  }
  r
}
such that
test <- c(9, 3, 2, 2, 8, 5, 4, 9, 1)
> cs_group(test, 10)
[1] 1 1 2 2 2 3 3 3 4

Calculate mean of specific row pattern

I have a dataframe like this:
V1 = paste0("AB", seq(1:48))
V2 = seq(1:48)
test = data.frame(name = V1, value = V2)
I want to calculate means of the value column over specific sets of rows.
The pattern of the rows is pretty complicated:
Rows of MeanA1: 1, 5, 9
Rows of MeanA2: 2, 6, 10
Rows of MeanA3: 3, 7, 11
Rows of MeanA4: 4, 8, 12
Rows of MeanB1: 13, 17, 21
Rows of MeanB2: 14, 18, 22
Rows of MeanB3: 15, 19, 23
Rows of MeanB4: 16, 20, 24
Rows of MeanC1: 25, 29, 33
Rows of MeanC2: 26, 30, 34
Rows of MeanC3: 27, 31, 35
Rows of MeanC4: 28, 32, 36
Rows of MeanD1: 37, 41, 45
Rows of MeanD2: 38, 42, 46
Rows of MeanD3: 39, 43, 47
Rows of MeanD4: 40, 44, 48
As you can see, it starts at 4 different points (1, 13, 25, 37), then always steps +4, and for the following 4 means it just steps 1 more row down.
I would like to have an output of all these means in one list.
Any ideas? NOTE: In this example the mean is of course always the middle number, but my real df is different.
Not quite sure about the output format you require, but the following code can calculate what you want anyhow.
calc_mean1 <- function(x) mean(test$value[seq(x, by = 4, length.out = 3)])
calc_mean2 <- function(x){sapply(x:(x+3), calc_mean1)}
output <- lapply(seq(1, 37, 12), calc_mean2)
names(output) <- paste0('Mean', LETTERS[seq_along(output)]) # remove this line if more than 26 groups.
output
## $MeanA
## [1] 5 6 7 8
## $MeanB
## [1] 17 18 19 20
## $MeanC
## [1] 29 30 31 32
## $MeanD
## [1] 41 42 43 44
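If a single flat vector is preferred, unlisting this output gives names that match the labels used in the question (MeanA1, MeanA2, ..., MeanD4):

unlist(output)
# a named vector: MeanA1 = 5, MeanA2 = 6, ..., MeanD3 = 43, MeanD4 = 44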
An idea via base R is to create a grouping variable for every 4 rows, split the data every 12 rows (nrow(test) / 4) and aggregate to find the mean, i.e.
test$new <- rep(1:4, nrow(test) %/% 4)
lapply(split(test, rep(1:4, each = nrow(test) %/% 4)), function(i)
  aggregate(value ~ new, i, mean))
# $`1`
# new value
# 1 1 5
# 2 2 6
# 3 3 7
# 4 4 8
# $`2`
# new value
# 1 1 17
# 2 2 18
# 3 3 19
# 4 4 20
# $`3`
# new value
# 1 1 29
# 2 2 30
# 3 3 31
# 4 4 32
# $`4`
# new value
# 1 1 41
# 2 2 42
# 3 3 43
# 4 4 44
And yet another way.
fun <- function(DF, col, step = 4) {
  run <- nrow(DF) / step^2
  res <- lapply(seq_len(step), function(inc) {
    inx <- seq_len(run * step) + (inc - 1) * run * step
    dftmp <- DF[inx, ]
    tapply(dftmp[[col]], rep(seq_len(step), run), mean, na.rm = TRUE)
  })
  names(res) <- sprintf("Mean%s", LETTERS[seq_len(step)])
  res
}
fun(test, 2, 4)
#$MeanA
#1 2 3 4
#5 6 7 8
#
#$MeanB
# 1 2 3 4
#17 18 19 20
#
#$MeanC
# 1 2 3 4
#29 30 31 32
#
#$MeanD
# 1 2 3 4
#41 42 43 44
Since you said you wanted a long list of the means, I assumed it could also be a vector where you just have all these values. You would get that like this:
V1 = paste0("AB", seq(1:48))
V2 = seq(1:48)
test = data.frame(name = V1, value = V2)

# starting rows of the 16 means described in the question
starts <- c(1:4, 13:16, 25:28, 37:40)

meanVector <- NULL
for (i in starts) {
  x <- c(test$value[i], test$value[i + 4], test$value[i + 8])
  m <- mean(x)
  meanVector <- c(meanVector, m)
}
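With the loop restricted to those sixteen starting rows (as above), the vector can also be given names matching the labels in the question (a small addition, not part of the original answer):

names(meanVector) <- paste0("Mean", rep(LETTERS[1:4], each = 4), 1:4)
meanVector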

Split a vector in R depending on entries

I input a vector vec <- c(2, 3, 4, 8, 10, 12, 15, 19, 20, 23, 27, 28, 39, 47, 52, 60, 64, 75), and the size of the intervals that I want to break the vector entries into.
In this example I want to break this into 9 different vectors based on the size of each entry.
In my case I want vector number 1 to be the entries in the interval [1,9], then vector 2 to be the entries in [10,18], etc.
In other words:
vec1: 2 3 4 8
vec2: 10 12 15
vec3: 19 20 23 27
etc.
I have tried using the split function but I do not know how to set a ratio that will work.
Maybe the following will do what you want.
# extend the breaks to the next multiple of 9 so the largest value (75) is not dropped
f <- cut(vec, seq(0, 9 * ceiling(max(vec) / 9), by = 9), include.lowest = TRUE)
sp <- split(vec, f)
sp <- sp[sapply(sp, function(x) length(x) != 0)]
sp
Use integer division %/% to return a vector of which group each value belongs in. Then split into separate vectors. Use (vec-1) to be "inclusive", i.e. 27 goes with group 3, not group 4.
split(vec,(vec-1) %/% 9)
Edit:
Another way using dplyr and cut which explicitly tags each interval
require(dplyr)
df2 <- data.frame(vec = vec)
df2 %>% mutate(interval = cut(vec, breaks = seq(0, ((max(vec) %/% 9) + 1) * 9, 9),
                              include.lowest = TRUE, right = TRUE))
vec interval
1 2 [0,9]
2 3 [0,9]
3 4 [0,9]
4 8 [0,9]
5 10 (9,18]
6 12 (9,18]
7 15 (9,18]
8 19 (18,27]
9 20 (18,27]
10 23 (18,27]
11 27 (18,27]
Maybe this:
library(purrr)
vec <- c(2, 3, 4, 8, 10, 12, 15, 19, 20, 23, 27, 28, 39, 47, 52, 60, 64, 75)
vec1 <- keep(vec, function(x) x >= 1 & x <= 9)
vec2 <- keep(vec, function(x) x >= 10 & x <= 18)
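Writing vec1, vec2, ... by hand gets tedious for nine intervals; a small generalisation of this keep() idea (not part of the original answer) collects them all in one named list:

# one list element per 9-wide interval: [1,9], [10,18], ..., [73,81]
intervals <- seq(1, 73, by = 9)
groups <- lapply(intervals, function(lo) keep(vec, function(x) x >= lo & x <= lo + 8))
names(groups) <- paste0("vec", seq_along(groups))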

R: Find the Variance of all Non-Zero Elements in Each Row

I have a dataframe d like this:
ID Value1 Value2 Value3
1 20 25 0
2 2 0 0
3 15 32 16
4 0 0 0
What I would like to do is calculate the variance for each person (ID), based only on non-zero values, and to return NA where this is not possible.
So for instance, in this example the variance for ID 1 would be var(20, 25),
for ID 2 it would return NA because you can't calculate a variance on just one entry, for ID 3 the variance would be var(15, 32, 16), and for ID 4 it would again return NA because it has no numbers at all to calculate a variance on.
How would I go about this? I currently have the following (incomplete) code, but this might not be the best way to go about it:
len <- nrow(d)
variances <- numeric(len)
for (i in 1:len) {
  # get all non-zero values in the ith row of the data into a vector nonzerodat here
  currentvar <- var(nonzerodat)
  variances[i] <- currentvar
}
Note this is a toy example, but the dataset I'm actually working with has over 40 different columns of values to calculate variance on, so something that easily scales would be great.
Data <- data.frame(ID = 1:4, Value1=c(20,2,15,0), Value2=c(25,0,32,0), Value3=c(0,0,16,0))
var_nonzero <- function(x) var(x[!x == 0])
apply(Data[, -1], 1, var_nonzero)
[1] 12.5 NA 91.0 NA
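To keep the IDs attached, the same result can be written straight back into the data frame (a small addition, not part of the original answer):

Data$variance <- apply(Data[, -1], 1, var_nonzero)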
This seems overwrought, but it works, and it gives you back an object with the ids attached to the statistics:
library(reshape2)
library(dplyr)
variances <- df %>%
  melt(., id.var = "id") %>%
  group_by(id) %>%
  summarise(variance = var(value[value != 0]))
Here's the toy data I used to test it:
df <- data.frame(id = seq(4), X1 = c(3, 0, 1, 7), X2 = c(10, 5, 0, 0), X3 = c(4, 6, 0, 0))
> df
id X1 X2 X3
1 1 3 10 4
2 2 0 5 6
3 3 1 0 0
4 4 7 0 0
And here's the result:
id variance
1 1 14.33333
2 2 0.50000
3 3 NA
4 4 NA
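reshape2 is retired these days; a roughly equivalent sketch with tidyr::pivot_longer (using the same toy df) would be:

library(dplyr)
library(tidyr)

df %>%
  pivot_longer(-id, names_to = "variable", values_to = "value") %>%
  group_by(id) %>%
  summarise(variance = var(value[value != 0]))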
