I have a numeric vector x of length N and would like to create a vector of the sums over every possible combination of the elements of x with at most M elements per combination. I put together a slow iterative approach; what I am looking for here is a way without using any loops.
Consider the approach I have been taking, in the following example with N = 5 and M = 4:
M <- 4
x <- 11:15
y <- as.matrix(expand.grid(rep(list(0:1), length(x))))
result <- y[rowSums(y) <= M, ] %*% x
However, as N gets large (above 22 for me), the expand.grid output becomes too big and gives an error (replace x above with x <- 11:55 to observe this). Ideally there would be an expand.grid function that permits restrictions on the rows before constructing the full matrix, which (at least for what I want) would keep the matrix size within memory limits.
Is there a way to achieve this without causing problems for large N?
Your problem has to do with the sheer number of combinations.
What you appear to be doing is listing all different combinations of 0's and 1's in a sequence of length of x.
In your example x has length 5, so you have 2^5 = 32 combinations.
When x has length 22, you have 2^22 = 4194304 combinations.
Couldn't you use a binary encoding instead?
In your case that would mean:
0 stands for 00000
1 stands for 00001
2 stands for 00010
3 stands for 00011
...
It will not solve your problem completely, but you should be able to get a bit further than now.
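A minimal sketch of that idea: treat each integer code as a subset via its bits and process the codes in manageable chunks, so the full 2^N matrix never has to exist in memory at once. The function name is my own, and intToBits limits the codes to N <= 31.
subset_sums_chunk <- function(x, codes, M) {
  # one 0/1 column of bits per code; rows correspond to elements of x
  bits <- vapply(codes,
                 function(i) as.integer(intToBits(i))[seq_along(x)],
                 integer(length(x)))
  keep <- colSums(bits) <= M                      # at most M elements per subset
  as.vector(crossprod(bits[, keep, drop = FALSE], x))
}
subset_sums_chunk(11:15, 0:31, 4)                 # same sums as the expand.grid approach
Calling it repeatedly on successive chunks of codes (say a million at a time) keeps memory bounded.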
Try this:
c(0, unlist(lapply(1:M, function(k) colSums(combn(x, k)))))
It generates the same result as with your expand.grid approach, shown below for the test data.
M <- 4
x <- 11:15
# expand.grid approach
y <- as.matrix(expand.grid(rep(list(0:1), length(x))))
result <- y[rowSums(y) <= M, ] %*% x
# combn approach
result1 <- c(0, unlist(lapply(1:M, function(k) colSums(combn(x, k)))))
all(sort(result[,1]) == sort(result1))
# [1] TRUE
This should be fast (it takes 0.227577 secs on my machine, with N=22, M=4):
x <- 1:22 # N = 22
M <- 4
c(0, unlist(lapply(1:M, function(k) colSums(combn(x, k)))))
# [1] 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 3 4 5 6 7
You may want to keep only the unique sums with
unique(c(0, unlist(lapply(1:M, function(k) colSums(combn(x, k))))))
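For a sense of how much work this saves: the combn approach enumerates only sum(choose(N, 0:M)) subsets instead of 2^N.
sum(choose(22, 0:4))
# [1] 9109
2^22
# [1] 4194304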
Related
I have a function f that needs to be applied to a single column of length n in segments of m length, where m divides n. (For example, in a column of 1000 values, apply f to the first 250 values, then to values 251-500, ...).
A plain loop seems too slow, since the column has over 16 million values. I was thinking the efficient way would be to separate the column of length n into q vectors of length m, where mq = n. Then I could apply f simultaneously to all these vectors using some lapply-like functionality. Then I could join the q vectors to obtain the transformed version of the column.
Is that the efficient way to go here? If so, what function could decompose a column into q vectors of equal length and what function should I use to broadcast f across the q vectors?
Lastly, although less importantly, what if we wanted to do this to several columns and not just one?
Context
I've programmed a function that computes the power spectrum of an EEG signal (a numeric vector). However, it is bad practice to compute the power spectrum of a whole signal at once. The correct method is to compute it epoch by epoch, in 30 or 5 second segments, and average the spectra of all those epochs. Hence I need to apply a function to a column (an EEG signal) by epochs (segments).
One way is to create an auxiliary grouping variable, so you can apply the function per group; depending on your function you can use group_by and/or summarize. An example:
df <- data.frame(
  x = rnorm(15),
  y = rnorm(15),
  z = rnorm(15)
)
library(dplyr)
df %>%
  mutate(
    aux = rep(1:3, each = nrow(df) / 3),
    across(.cols = c(x, y, z), .fns = ~ . + 2 * aux)
  )
x y z aux
1 2.164841 2.882465 2.139098 1
2 2.364115 2.205598 2.410275 1
3 2.552158 1.383564 1.441543 1
4 1.398107 1.265201 2.605371 1
5 1.006301 1.868197 1.493666 1
6 5.026785 4.310017 2.579434 2
7 4.751061 2.960320 4.127993 2
8 2.490833 3.815691 5.945851 2
9 3.904853 4.967267 4.800914 2
10 3.104052 3.891720 5.165253 2
11 3.929249 5.301579 6.358856 3
12 6.150120 5.724055 5.391443 3
13 5.920788 7.114649 5.797759 3
14 5.902631 6.550044 5.726752 3
15 6.216153 7.236676 5.531300 3
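For the segment-wise use case in the question itself, here is a base-R sketch along the lines the asker proposed (split into q segments, apply, recombine); f below is a hypothetical stand-in for the power-spectrum function, and m is assumed to divide n:
f <- function(seg) seg - mean(seg)    # placeholder for the real epoch function
n <- 1000; m <- 250                   # column length and epoch length
col <- rnorm(n)
epochs <- split(col, rep(seq_len(n / m), each = m))
out <- unlist(lapply(epochs, f), use.names = FALSE)   # transformed column, original order
If f instead returns one spectrum per epoch, the epoch spectra can be averaged with rowMeans(sapply(epochs, f)), and the whole recipe extends to several columns via lapply over the data frame.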
I'm trying to create a function that will spread a number into an X number of groups that are approximately equal. For example, splitting 32 into 3 separate groups would result in 11, 11, and 10. Splitting 32 into 5 separate groups would result in 7, 7, 6, 6, 6.
I've found a lot of Python approaches, and I've found lots of R approaches that split samples. But I haven't found any R-specific approaches that focus on splitting a specific count rather than a sample.
Any help would be much appreciated!
A transcription of the Python code provided by @Poe Dator: every group gets the integer quotient n %/% p, and the first n %% p groups get one extra.
int_split <- function(n, p) n %/% p + (sequence(p) - 1 < n %% p)
int_split(32, 3)
[1] 11 11 10
int_split(32, 5)
[1] 7 7 6 6 6
You could do:
split_count <- function(x, n){
  grp <- rep(x %/% n, n)
  y <- x %% n
  grp[seq_len(y)] <- grp[seq_len(y)] + 1
  grp
}
split_count(32, 2)
[1] 16 16
split_count(32, 5)
[1] 7 7 6 6 6
split_count(32, 3)
[1] 11 11 10
Here's a "Monte Carlo" approach. I generate a bunch (N) of random integers (size = grps) that sum to Num and then choose the combination with the least difference.
Num <- 32
grps <- 4
N <- 1000
tmp <- rmultinom(N, Num, rep(1/grps, grps))
i <- which.min(apply(tmp, 2, function(x) sum(abs(diff(x)))))
tmp[,i]
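A quick sanity check: every rmultinom draw sums to Num by construction, so whichever column is chosen, the split is exact.
sum(tmp[, i])
# [1] 32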
Surely there has to be a function out there in some package for this?
I've searched and I've found this function to calculate the mode:
Mode <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
But I'd like a function that lets me easily calculate the 2nd/3rd/4th/nth most common value in a column of data.
Ultimately I will apply this function to a large number of dplyr::group_by()s.
Thank you for your help!
Maybe you could try
f <- function (x) with(rle(sort(x)), values[order(lengths, decreasing = TRUE)])
This gives the unique values sorted by decreasing frequency. The first will be the mode, the 2nd will be the 2nd most common, etc.
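So extracting the nth most common value is just indexing into that result; a tiny wrapper (nth_mode is my own name for it) could look like:
nth_mode <- function(x, n) f(x)[n]   # e.g. nth_mode(x, 2) for the 2nd most common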
Another method is based on table():
g <- function (x) as.numeric(names(sort(table(x), decreasing = TRUE)))
But this is not recommended, as the input vector x will be coerced to factor first, which is very slow for a long vector. Also, on exit we have to extract the character names of the table and coerce them back to numeric.
Example
set.seed(0); x <- rpois(100, 10)
f(x)
# [1] 11 12 7 9 8 13 10 14 5 15 6 2 3 16
Let's compare with the contingency table from table:
tab <- sort(table(x), decreasing = TRUE)
# 11 12 7 9 8 13 10 14 5 15 6 2 3 16
# 14 14 11 11 10 10 9 7 5 4 2 1 1 1
as.numeric(names(tab))
# [1] 11 12 7 9 8 13 10 14 5 15 6 2 3 16
So the results are the same.
Here is an R function that I made (inspired by several other SO posts), which may work for your goal (and I use a local dataset on religious affiliation to illustrate it):
It's simple; only R base functions are involved: length, match, sort, tabulate, table, unique, which, as.character.
Find_Nth_Mode = function(d, N = 2) {
  # helper: the Nth largest value of x
  maxN = function(x, N){
    len = length(x)
    if(N > len){
      warning('N greater than length(x). Setting N=length(x)')
      N = length(x)
    }
    sort(x, partial = len - N + 1)[len - N + 1]
  }
  ux = unique(as.character(d))
  a1 = tabulate(match(d, ux))   # frequency of each unique value
  a2 = maxN(a1, N)              # the Nth highest frequency
  a3 = which(a1 == a2)          # which value(s) attain it
  ux[a3]
}
Sample Output
> table(religion_data$relig11)
0.None 1.Protestant_Conservative 2.Protestant_Liberal 3.Catholic
34486 6134 19678 36880
4.Orthodox 5.Islam_Sunni 6.Islam_Shia 7.Hindu
20702 28170 668 4653
8.Buddhism 9.Jewish 10.Other
9983 381 6851
> Find_Nth_Mode(religion_data$relig11, 1)
[1] "3.Catholic"
> Find_Nth_Mode(religion_data$relig11, 2)
[1] "0.None"
> Find_Nth_Mode(religion_data$relig11, 3)
[1] "5.Islam_Sunni"
Reference:
I want to express my gratitude to these posts, from which I took the two functions and integrated them into one:
function to find the N th largest value: Fastest way to find second (third...) highest/lowest value in vector or column
how to find the second largest mode value?
Calculating the mode or 2nd/3rd/4th most common value
I'm new to R and can't seem to get to grips with how to refer to a previous value of "self", in this case the previous "b" (my attempt below uses b[-1]).
b <- (1/14) * MyData$High + (13/14) * b[-1]
Obviously I need a NA somewhere in there for the first calculation, but I just couldn't figure this out on my own.
Adding example of what the sought after result should be (A=MyData$High):
A b
1 5 NA
2 10 0.7142...
3 15 3.0393...
4 20 4.6079...
1) for loop
Normally one would just use a simple loop for this:
MyData <- data.frame(A = c(5, 10, 15, 20))
MyData$b <- 0
n <- nrow(MyData)
if (n > 1) for(i in 2:n) MyData$b[i] <- ( MyData$A[i] + 13 * MyData$b[i-1] )/ 14
MyData$b[1] <- NA
giving:
> MyData
A b
1 5 NA
2 10 0.7142857
3 15 1.7346939
4 20 3.0393586
2) Reduce
It would also be possible to use Reduce. One first defines a function f that carries out the body of the loop, and then we have Reduce invoke it repeatedly like this:
f <- function(b, A) (A + 13 * b) / 14
MyData$b <- Reduce(f, MyData$A[-1], 0, accumulate = TRUE)
MyData$b[1] <- NA
giving the same result.
This gives the appearance of being vectorized but in fact if you look at the source of Reduce it does a for loop itself.
3) filter
Noting that the form of the problem is a recursive filter with coefficient 13/14 operating on A/14 (but with A[1] replaced with 0), we can write the following. Since filter returns a time series, we use c(...) to convert it back to an ordinary vector. This approach actually is vectorised, as the filter operation is performed in C.
MyData$b <- c(filter(replace(MyData$A, 1, 0)/14, 13/14, method = "recursive"))
MyData$b[1] <- NA
again giving the same result.
Note: All solutions assume that MyData has at least 1 row.
There are a couple of ways you could do this.
The first method is a simple loop
df <- data.frame(A = seq(5, 25, 5))
df$b <- 0
for(i in 2:nrow(df)){
  df$b[i] <- (1/14) * df$A[i] + (13/14) * df$b[i-1]
}
df
A b
1 5 0.0000000
2 10 0.7142857
3 15 1.7346939
4 20 3.0393586
5 25 4.6079758
This doesn't give the exact values given in the expected answer, but it's close enough that I've assumed you made a transcription mistake. Note that we have to assume that we can take the NA in df$b[1] as being zero or we get NA all the way down.
If you have heaps of data or need to do this a bunch of time the speed could be improved by implementing the code in C++ and calling it from R.
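A minimal sketch of that C++ route via Rcpp::cppFunction (this assumes the Rcpp package is available; the function name ewma is my own):
library(Rcpp)
cppFunction('
NumericVector ewma(NumericVector A) {
  int n = A.size();
  NumericVector b(n);            // initialised to zeros, so b[0] = 0
  for (int i = 1; i < n; i++)
    b[i] = (A[i] + 13.0 * b[i - 1]) / 14.0;
  return b;
}')
df$b <- ewma(df$A)   # then set df$b[1] to NA as before, if desired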
The second method uses the R function sapply.
The form you present the problem in,
b[i] = (1/14) * A[i] + (13/14) * b[i-1],
is recursive, which makes it impossible to vectorise directly. However, we can do some maths and find that it is equivalent to the closed form
b[n] = (1/14) * sum over i = 1..n of (13/14)^(n-i) * A[i].
We can then write a function which calculates b_i and use sapply to calculate each element:
calc_b <- function(n, A){
  (1/14) * sum((13/14)^(n - 1:n) * A[1:n])
}
df2 <- data.frame(A = seq(10,25,5))
df2$b <- sapply(seq_along(df2$A), calc_b, df2$A)
df2
A b
1 10 0.7142857
2 15 1.7346939
3 20 3.0393586
4 25 4.6079758
Note: We need to drop the first row (where A = 5) in order for the calculation to perform correctly. Also be aware that calc_b recomputes the full sum for every element, so this does more total work than the loop on long vectors.
I've got a simple question that's stumping me. I'm trying to use a loop to count how many values of a vector fall in a bin (0,.01), (.01,.02), etc. For example (the loop does not work):
set.seed(12345)
x<- rnorm(100, 0, .05)
vec <- rep(NA, 11)
for(i in .01:.11){
  vec[i] <- sum(x > i & x < (i + .01))
}
I would like this to ultimately produce a vector of the count between each break, such that the output for the above is:
5,9,10...
I think this may have something to do with the indexing/decimals. Thanks for any and all help.
Your example contains negative numbers, so I assume you are looking to do this for positive values only. You should use cut to divide your vector into the given bins by setting the breaks parameter. Then, using table, you can compute the frequency of x's falling within each interval.
## filter x
x <- x[x >= 0.01]   ## EDIT here: was x <- abs(x)
res <- table(cut(x, breaks = seq(round(min(x), 2), round(max(x), 2), 0.01)))
## prettier output coerce to data.frame
as.data.frame(res)
# Var1 Freq
# 1 (0.01,0.02] 5
# 2 (0.02,0.03] 9
# 3 (0.03,0.04] 10
# 4 (0.04,0.05] 10
# 5 (0.05,0.06] 4
# 6 (0.06,0.07] 0
# 7 (0.07,0.08] 5
# 8 (0.08,0.09] 2
# 9 (0.09,0.1] 5
# 10 (0.1,0.11] 4
# 11 (0.11,0.12] 1
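If only the counts are needed (not the labelled intervals), hist() can produce the same numbers without plotting, assuming the rounded break sequence spans the data as in the cut() call above (its include.lowest default may differ from cut() at the very first boundary):
brks <- seq(round(min(x), 2), round(max(x), 2), 0.01)
hist(x, breaks = brks, plot = FALSE)$counts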