This may be a rather complex question, so if someone can at least point me in the right direction, I can probably figure out the rest on my own.
Sample data:
dat <- data.frame(A = c(1, 4, 5, 3, NA, 5), B = c(6, 5, NA, 5, 3, 5), C = c(5, 3, 1, 5, 3, 7), D = c(5, NA, 3, 10, 4, 5))
   A  B C  D
1  1  6 5  5
2  4  5 3 NA
3  5 NA 1  3
4  3  5 5 10
5 NA  3 3  4
6  5  5 7  5
I would like to find all possible letter sequences of different lengths from the table shown above, picking one letter per row. For example, one valid letter sequence might be: A C A D D B. Another valid sequence could be B C C.
However, there are a few rules I'd like the sequences to follow:
1. Must be able to specify the minimum length of the returned sequence.
Note that in my example above, the min sequence length was 3 and the max sequence length was equal to the number of rows. I would like to be able to specify the min value (the max value will always be equal to the number of rows, 6 in the case of the sample data).
Note that if the sequence length is shorter than 6, it cannot be generated from skipping rows. In other words, any short sequences must come from consecutive rows. Clarification based on comments: Short sequences do not have to start on row 1. A short sequence could start on row 3 and continue onward through consecutive rows to row 6.
2. Letters with an NA value are not available for sampling.
Note that in row 2 there is an NA in the D column. This means that D would not be available for sampling in row 2. So A B D would be a valid combination but A D D would not be valid.
3. The sequences must be ranked based on the values in each cell.
Notice how each cell has a specific value in it. Each sequence chosen can be ranked by summing up the values shown in the table for the chosen letters. Using the example from above, A C A D D B would have a rank of 1+3+5+10+4+5 = 28. So when generating all possible sequences, they should be ordered from highest rank to lowest rank.
I would like to apply all three of these rules to the data table listed above to find all combinations of sequences possible of minimum length 3 and maximum length 6.
Please let me know if I need to clarify anything!
In principle, I believe you want to do this using expand.grid. Using your example data, I worked out the basics here:
dat <- data.frame(A = c(1, 4, 5, 3, NA, 5),
B = c(6, 5, NA, 5, 3, 5),
C = c(5, 3, 1, 5, 3, 7),
D = c(5, NA, 3, 10, 4, 5))
dat[,1][!is.na(dat[,1])] <- paste("A",na.omit(dat[,1]),sep="-")
dat[,2][!is.na(dat[,2])] <- paste("B",na.omit(dat[,2]),sep="-")
dat[,3][!is.na(dat[,3])] <- paste("C",na.omit(dat[,3]),sep="-")
dat[,4][!is.na(dat[,4])] <- paste("D",na.omit(dat[,4]),sep="-")
transp_data <- as.data.frame(t(dat))
data_list <- list(V1 = as.vector(na.omit(transp_data$V1)),
V2 = as.vector(na.omit(transp_data$V2)),
V3 = as.vector(na.omit(transp_data$V3)),
V4 = as.vector(na.omit(transp_data$V4)),
V5 = as.vector(na.omit(transp_data$V5)),
V6 = as.vector(na.omit(transp_data$V6)))
This code essentially transforms your data frame into a list of vectors of different lengths (one element per row of your original data, with the NAs omitted). The reason to do this is that it makes finding the acceptable combinations trivially easy with the expand.grid function.
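For what it's worth, the same list can be built more compactly. This is just a sketch; it starts from the question's original numeric data (named dat0 here so it doesn't clash with the modified dat above):
dat0 <- data.frame(A = c(1, 4, 5, 3, NA, 5), B = c(6, 5, NA, 5, 3, 5),
                   C = c(5, 3, 1, 5, 3, 7), D = c(5, NA, 3, 10, 4, 5))
data_list <- lapply(seq_len(nrow(dat0)), function(r) {
  row <- unlist(dat0[r, ])  # named numeric vector for row r
  paste(names(row)[!is.na(row)], row[!is.na(row)], sep = "-")
})
names(data_list) <- paste0("V", seq_len(nrow(dat0)))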
To solve for the length-six sequences, you would simply use:
# stringsAsFactors = FALSE keeps the columns as character strings,
# which the regex extraction below relies on
grid_6 <- do.call(what = expand.grid,
                  args = c(data_list, list(stringsAsFactors = FALSE)))
This gives you a data frame of all possible length-six sequences that meet your criteria (i.e. no NA elements are used). You can extract the numeric data back out with some regular expressions (not a very vectorized way of doing it, but this is a complex problem that I don't have time to fully wrap into a function).
grid_6_letters <- grid_6
for(x in 1:ncol(grid_6_letters)) {
  for(y in 1:nrow(grid_6_letters)) {
    grid_6_letters[y,x] <- gsub(pattern = "-[0-9]*", replacement = "", x = grid_6_letters[y,x])
  }
}
grid_6_numbers <- grid_6
for(x in 1:ncol(grid_6_numbers)) {
  for(y in 1:nrow(grid_6_numbers)) {
    grid_6_numbers[y,x] <- gsub(pattern = "^[ABCD]-", replacement = "", x = grid_6_numbers[y,x])
  }
  grid_6_numbers[[x]] <- as.numeric(grid_6_numbers[[x]])
}
grid_6_letters$Total <- rowSums(grid_6_numbers)
grid_6_letters <- grid_6_letters[order(grid_6_letters$Total, decreasing = TRUE),]
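As a more vectorized sketch of the same extraction (it assumes the character columns produced by the stringsAsFactors = FALSE call above), the per-column work can be done with lapply instead of the nested loops:
grid_6_letters <- as.data.frame(lapply(grid_6, sub, pattern = "-[0-9]+$", replacement = ""),
                                stringsAsFactors = FALSE)
grid_6_numbers <- as.data.frame(lapply(grid_6, function(col) as.numeric(sub("^[A-D]-", "", col))))
grid_6_letters$Total <- rowSums(grid_6_numbers)
grid_6_letters <- grid_6_letters[order(grid_6_letters$Total, decreasing = TRUE), ]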
Anyway, if you wanted to get the various lower-level combinations, you could do it by simply using expand.grid on subsets of the list and combining the results with rbind (with some judicious use of setNames as needed). Example:
grid_3 <- rbind(setNames(do.call(what = expand.grid,args = list(data_list[1:3],stringsAsFactors = FALSE)),nm = c("V1","V2","V3")),
setNames(do.call(what = expand.grid,args = list(data_list[2:4],stringsAsFactors = FALSE)),nm = c("V1","V2","V3")),
setNames(do.call(what = expand.grid,args = list(data_list[3:5],stringsAsFactors = FALSE)),nm = c("V1","V2","V3")),
setNames(do.call(what = expand.grid,args = list(data_list[4:6],stringsAsFactors = FALSE)),nm = c("V1","V2","V3")))
Anyway, with some time and programming, you can likely wrap this into a function that is much better than my example, but hopefully it will get you started.
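For example, here is a hedged sketch of such a wrapper (make_grids is a made-up name, not from the answer above). It builds one expand.grid per window of consecutive rows, for every length from the requested minimum up to the number of rows:
make_grids <- function(data_list, min_len) {
  n <- length(data_list)
  out <- list()
  for (len in min_len:n) {
    for (start in 1:(n - len + 1)) {
      g <- do.call(expand.grid,
                   c(data_list[start:(start + len - 1)],
                     list(stringsAsFactors = FALSE)))
      names(g) <- paste0("V", seq_len(len))
      out[[length(out) + 1]] <- g
    }
  }
  out
}
all_grids <- make_grids(data_list, 3)  # one data frame per consecutive-row window
Each element of all_grids can then be ranked the same way as grid_6 above.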
Sorry, I don't do any R anymore, so I'll try to help with some quick-and-dirty code...
addPointsToSequence <- function(seq0, currRow){
  for(i in 1:ncol(dat)){            # loop over the columns (letters)
    if (!is.na(dat[currRow, i])){   # letters with an NA value are not available
      # add this cell's value at the end of the sequence
      seq2 <- c(seq0, dat[currRow, i])
      # here I add the value, but you may prefer adding colnames(dat)[i]
      # and keeping the values in a second variable to rank the sequence
      if(length(seq2) >= 3){
        # save seq2 as an existing sequence where you need to
        print(seq2)
      }
      if(currRow < nrow(dat)){
        addPointsToSequence(seq2, currRow + 1)
      }
    }
  }
}
dat <- data.frame(A = c(1, 4, 5, 3, NA, 5), B = c(6, 5, NA, 5, 3, 5), C = c(5, 3, 1, 5, 3, 7), D = c(5, NA, 3, 10, 4, 5))
for (startingRow in 1:4){
  # 4 is the last row you can start from and still make a length-3 sequence
  emptySequence <- c()
  addPointsToSequence(emptySequence, startingRow)
}
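If it helps, here is a hedged sketch of the same recursive idea rewritten to collect and rank the sequences instead of printing them (collect_seqs is a name I made up, not from the code above):
collect_seqs <- function(dat, min_len = 3) {
  found <- list()
  recurse <- function(letters, values, currRow) {
    for (i in seq_len(ncol(dat))) {
      if (!is.na(dat[currRow, i])) {
        l2 <- c(letters, colnames(dat)[i])
        v2 <- c(values, dat[currRow, i])
        if (length(l2) >= min_len) {
          # store the letter sequence together with its summed rank
          found[[length(found) + 1]] <<- list(seq = l2, rank = sum(v2))
        }
        if (currRow < nrow(dat)) recurse(l2, v2, currRow + 1)
      }
    }
  }
  for (startingRow in seq_len(nrow(dat) - min_len + 1)) {
    recurse(character(0), numeric(0), startingRow)
  }
  found[order(sapply(found, `[[`, "rank"), decreasing = TRUE)]
}
top <- collect_seqs(dat, min_len = 3)
top[[1]]  # highest-ranked sequence and its summed value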
Related
I have a vector like the following:
example <- c(1, 2, 3, 8, 10, 11)
And I am trying to write a function that returns output like what you would get from:
desired_output <- list(first_sequence = c(1, 2, 3),
second_sequence = 8,
third_sequence = c(10, 11)
)
Actually, what I want is to count how many such sequences there are in my vector, and the length of each one. It just happens that a list like the one in "desired_output" would be sufficient.
The end goal is to construct another vector, let's call it "b", that contains the following:
b <- c(3, 3, 3, 1, 2, 2)
The real-world problem behind this is to measure the height of 3D objects contained in a 3D point cloud.
I've tried to program both a function that returns the list in "desired_output" and a recursive function that directly outputs vector "b", but succeeded at neither.
Does anyone have any idea?
Thank you very much.
We can split into a list by creating a grouping variable from the difference of adjacent elements:
out <- split(example, cumsum(c(TRUE, abs(diff(example)) != 1)))
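Printing out shows it already matches the desired_output list from the question:
out
$`1`
[1] 1 2 3
$`2`
[1] 8
$`3`
[1] 10 11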
Then, we get the lengths and replicate them:
unname(rep(lengths(out), lengths(out)))
[1] 3 3 3 1 2 2
You could do the following (within a run of consecutive values, the value minus its position is constant, so it serves as a grouping variable):
out <- split(example, example - seq_along(example))
To get the lengths:
ln <- unname(lengths(out))
rep(ln, ln)
[1] 3 3 3 1 2 2
Here is one more. Not elegant but a different approach:
1. Create a data frame from the example vector
2. Assign the elements to groups
3. Aggregate with tapply
example_df <- data.frame(example = example)
# a new group starts wherever the gap to the previous element is not exactly 1
example_df$group <- cumsum(ifelse(c(1, diff(example) - 1), 1, 0))
tapply(example_df$example, example_df$group, function(x) x)
$`1`
[1] 1 2 3
$`2`
[1] 8
$`3`
[1] 10 11
One other option is to use ave:
ave(example, cumsum(c(1, diff(example) != 1)), FUN = length)
# [1] 3 3 3 1 2 2
#or just
ave(example, example - seq(example), FUN = length)
I want to calculate the rolling sum of n rows in my dataset, where the window size n depends on the sum itself: the window should extend backwards just far enough for the rolling sum of time to reach 5 min. Basically, I want to calculate how much distance the person traveled in the last 5 min, but the time steps are not equally spaced. Here's a dummy data.table for clarity (the last two columns of the desired output are the ones I need computed):
I am looking for a data.table solution in R.
Input data table:
ID  Distance  Time
 1         2     2
 1         4     1
 1         2     1
 1         2     2
 1         3     3
 1         6     3
 1         1     1
Desired Output:
ID  Distance  Time  5.min.rolling.distance  5.min.rolling.time
 1         2     2                      NA                  NA
 1         4     1                      NA                  NA
 1         2     1                      NA                  NA
 1         2     2                      10                   6
 1         3     3                       5                   5
 1         6     3                       9                   6
 1         1     1                      10                   7
Here is a solution that works with non-integer (double) time units, as well as a simpler solution that works with integer time units. I tested the double solution on 10,000 records and on my 2015 laptop it executed instantly, but I can't make any guarantees about performance on 40 GB of data.
If you wanted to generalize this code, I'd look at the RcppRoll package and learn how to implement C++ code in R.
Solution with double time units
I broke this down into two problems. First, figure out the window size by looking back until we get to at least 5 minutes (or run out of data). Second, take the sum of distance and time from the current observation back to the start of that window.
Bad loop code in R usually tries to 'grow' a vector; it's a huge efficiency gain to pre-allocate the vector and then assign into its elements.
input <- data.frame(
dist = c(2, 4, 2, 2, 3, 6, 1),
time = c(2, 1, 1, 2, 3, 3, 1)
)
var_window_cumsum <- function(input, MIN_TIME) {
  if(is.null(input$time) | is.null(input$dist)) {
    stop("input must have variables time and dist that record the row's duration and distance traveled.")
  }
  n <- nrow(input)
  # First, figure out how far we need to look back: this vector will store
  # the position of the first record that gets our target record up to 5 min
  # or more. If we can't look back to 5 min, we leave it as NA.
  time_indx = rep(NA_integer_, length = n) # always preallocate your vector!
  for(time in (1:n)) {
    prior = time # start at self in case observation is already >= MIN_TIME
    while(sum(input$time[time:prior]) < MIN_TIME & prior > 1) {
      prior = prior - 1
    }
    # if we can't look back to our minimum time, leave the indx as NA
    if (sum(input$time[time:prior]) >= MIN_TIME) {
      time_indx[time] = prior
    }
  }
  # Now that we know how far to look back, it's easy to find out the total
  # distance and total time.
  dist5 = rep(NA_integer_, n)
  time5 = rep(NA_integer_, n)
  for (i in 1:n) {
    dist5[i] <- ifelse(!is.na(time_indx[i]),
                       sum(input$dist[i:time_indx[i]]),
                       NA)
    time5[i] <- ifelse(!is.na(time_indx[i]),
                       sum(input$time[i:time_indx[i]]),
                       NA)
  }
  cbind(input,
        window_dist = dist5,
        window_time = time5,
        window_start = time_indx)
}
# output looks good
# Warning: example data does not include exhaustive cases
# I have not setup thorough testing
var_window_cumsum(input, 5)
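# On the example input, the call above should print the following (matching
# the question's desired output, plus each window's start row):
#   dist time window_dist window_time window_start
# 1    2    2          NA          NA           NA
# 2    4    1          NA          NA           NA
# 3    2    1          NA          NA           NA
# 4    2    2          10           6            1
# 5    3    3           5           5            4
# 6    6    3           9           6            5
# 7    1    1          10           7            5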
# Test on a larger dataset, 10k records
set.seed(1234)
n <- 10000
med_input <- data.frame(
dist = sample(1:5, n, replace = TRUE),
time = sample(1:60, n, replace = TRUE) / 10
)
# you should inspect this to make sure there are no errors
med_output <- var_window_cumsum(med_input, 5)
Solution with integer time units
If your time unit is in integers and your data isn't too big, it may work to complete your dataset. This is a little bit of a hack, but here I create a continuous timeid variable that goes from the starting time to the maximum time, with one row for each integer unit of time. From there it's easy to calculate a rolling cumulative sum over the last five time units. Finally, we get rid of all the fake rows we added in (you want to make sure to do that, because they will have invalid cumulative sum data). Also, it's important to note that I use roll_sumr and not roll_sum; roll_sumr pads the left side of the output vector with NAs for the first 4 units.
library(tidyverse)
library(RcppRoll)
input <- data.frame(
dist = c(2, 4, 2, 2, 3, 6, 1),
time = c(2, 1, 1, 2, 3, 3, 1)
)
desired_dist5 <- c(NA, NA, NA, 10, 5, 9, 10)
desired_time5 <- c(NA, NA, NA, 6, 5, 6, 7)
output <- input %>%
mutate(timeid = cumsum(time),
realrow = TRUE) %>%
complete(timeid = 1:max(timeid)) %>%
mutate(dist5 = roll_sumr(dist, 5, na.rm = T),
time5 = roll_sumr(time, 5, na.rm = T)) %>%
filter(realrow) %>%
select(-c(realrow, timeid))
# Check against example table
output$dist5 == desired_dist5
output$time5 == desired_time5
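Since the question specifically asked for a data.table approach: below is a hedged sketch of the same "smallest look-back window reaching 5 minutes" logic using cumulative sums and findInterval. The column names are mine, it assumes Time is always positive so the cumulative time is nondecreasing, and it reproduces the desired output above but has not been tested beyond that example.
library(data.table)
dt <- data.table(ID = 1L,
                 Distance = c(2, 4, 2, 2, 3, 6, 1),
                 Time     = c(2, 1, 1, 2, 3, 3, 1))
dt[, `:=`(cd = cumsum(Distance), ct = cumsum(Time)), by = ID]
# The window ending at row i starts at the largest j with ct[j - 1] <= ct[i] - 5;
# findInterval() finds that j in the padded cumulative-time vector c(0, ct).
dt[, start := {
  s <- findInterval(ct - 5, c(0, ct))
  fifelse(s > 0L, s, NA_integer_)  # NA while even the full history is under 5 min
}, by = ID]
dt[, `:=`(rolling.distance = cd - c(0, cd)[start],
          rolling.time     = ct - c(0, ct)[start]), by = ID]
dt[, c("cd", "ct", "start") := NULL]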
I am trying to create a function that buys at an N period high. So if I have a vector:
x = c(1, 2, 3, 4, 5, 1, 2, 3, 4, 5)
I want to take the rolling 3 period high. This is the output I would like the function to return:
x = c(1, 2, 3, 4, 5, 5, 5, 3, 4, 5)
I am trying to do this on an xts object.
Here is what I tried:
rollapplyr(SPY$SPY.Adjusted, width = 40, FUN = cummax)
rollapply(SPY$SPY.Adjusted, width = 40, FUN = "cummax")
rapply(SPY$SPY.Adjusted, width = 40, FUN = cummax)
The error I am receiving is:
Error in `dimnames<-.xts`(`*tmp*`, value = dn) :
length of 'dimnames' [2] not equal to array extent
Thanks in advance
You're close. Realize that rollapply (et al.) is in this case expecting a single number back, but cummax is returning a vector. Let's trace through this:
When using rollapply(..., partial=TRUE), the first pass is just the first number: 1
Second call, the first two numbers. You are expecting 2 (so that it will append to the previous step's 1), but look at cummax(1:2): it has length 2. Conclusion from this step: the cum* functions are cumulative: they always consider everything from the start up to and including the current number when they perform their logic/transformation.
Third call, our first visit to a full window (in this case): considering 1 2 3, we want 3. max works.
So I think you want this:
zoo::rollapplyr(x, width = 3, FUN = max, partial = TRUE)
# [1] 1 2 3 4 5 5 5 3 4 5
partial allows us to look at 1 and 1-2 before moving on to the first full window of 1-3. From the help page:
partial: logical or numeric. If 'FALSE' (default) then 'FUN' is only
applied when all indexes of the rolling window are within the
observed time range. If 'TRUE', then the subset of indexes
that are in range are passed to 'FUN'. A numeric argument to
'partial' can be used to determine the minimal window size for
partial computations. See below for more details.
Perhaps it is helpful -- if not perfectly accurate -- to think of cummax as equivalent to
rollapplyr(x, width = length(x), FUN = max, partial = TRUE)
# [1] 1 2 3 4 5 5 5 5 5 5
cummax(x)
# [1] 1 2 3 4 5 5 5 5 5 5
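Applied back to the xts use case in the question, the call would look something like the sketch below (it assumes SPY is an xts object with a SPY.Adjusted column, e.g. loaded via quantmod::getSymbols("SPY")):
library(zoo)
# 40-bar rolling high of the adjusted close, with partial windows at the start
SPY$roll.high.40 <- rollapplyr(SPY$SPY.Adjusted, width = 40, FUN = max, partial = TRUE)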
Thinking I could take the easy way out, I was going to use nested ifelse calls to replace id codes in an entire dataset. I have a dataset with an id column. I have to replace these old ids with updated ids, but there are 50k+ rows with 270 unique ids. So, I first tried:
df$id<- ifelse(df$id== 2, 1,
ifelse(df$id== 3, 5,
ifelse(df$id == 4, 5,
ifelse(df$id== 6, NA,
ifelse(df$id== 7, 7,
ifelse(df$id== 285, NA,
ifelse(df$id== 8, 10,.....
ifelse(df$id == 200, 19, df$id)
While this would have worked, I am limited to 51 nests, and I cannot split the replacement into separate passes because each pass would only cover a quarter of the set, and updates from an earlier pass would interfere with later ones since the old and new codes overlap.
I then tried
df$id[df$id== 2] <- 1
and I was going to do that for every code. However, if I update all 2s to 1, there is a later rule in which "1" should become some value X, and at that point both the old and the new 1s would become X when I only want the old 1s to change... I actually think this rules out the ifelse approach even if 51 were not the limit. Is there a function similar to VLOOKUP in Excel? Any ideas?
Thanks!
This older question is related to replacing cell contents, but its approach does not work in my case:
Replace contents of factor column in R dataframe
A partial example:
df <- data.frame(id=seq(1, 10))
old.id <- c(2, 3, 4, 6)
new.id <- c(1, 5, 5, NA)
# for each id that has a match in old.id, look up the position of its match
# and pull in the corresponding new id
df$id[df$id %in% old.id] <- new.id[unlist(sapply(df$id, function(x) which(old.id == x)))]
Output:
> df
id
1 1
2 1
3 5
4 5
5 5
6 NA
7 7
8 8
9 9
10 10
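For a lookup-table style replacement closer to Excel's VLOOKUP, here is an alternative sketch using match() (a single pass over the data, so overlapping old and new codes can't interfere with each other):
lookup <- data.frame(old = c(2, 3, 4, 6), new = c(1, 5, 5, NA))
pos <- match(df$id, lookup$old)  # NA wherever the id is not being recoded
df$id[!is.na(pos)] <- lookup$new[pos[!is.na(pos)]]
With the full problem you would just build lookup from your 270 old/new id pairs.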
Assume you have a vector with runs of repeated values:
v <- c(1, 1, 1, 2, 2, 2, 2, 1, 1, 3, 3, 3, 3)
How can it best be reduced to one value per run plus the length of each run? I.e. the first run is 1 repeated three times; 2nd run: 2 repeated four times; 3rd run: 1 repeated two times, and so on:
v.df <- data.frame(value = c(1, 2, 1, 3),
repetitions = c(3, 4, 2, 4))
In a procedural language I might just iterate through a loop and build the data.frame as I go, but with a large dataset in R such an approach is inefficient. Any advice?
with(rle(v), data.frame(values, lengths))
should get you what you need:
  values lengths
1      1       3
2      2       4
3      1       2
4      3       4
Or, more simply:
data.frame(unclass(rle(v)))