x=c(NA,NA,NA,1,NA,NA,NA,NA,0,NA,NA,NA,1,NA,NA,0,NA,NA,NA,NA,1,NA,NA,NA,0,NA....)
or
x=c(NA,NA,NA,0,NA,NA,NA,NA,1,NA,NA,NA,0,NA,NA,1,NA,NA,NA,NA,0,NA,NA,NA,1,NA....)
y = seq_along(x)
I would like z to be a new vector equal to y, except where the NAs fall between a 0 and a 1 (not between a 1 and a 0); there z should repeat the value of y at the position where x is 0:
[1] 1 2 3 4 5 6 7 8 9 9 9 9 13 14 15 16 16 16 16 16 21 22 23 24 25 25
or
[1] 1 2 3 4 4 4 4 4 9 10 11 12 13 13 13 16 17 18 19 20 21 21 21 21 25 26
depending on x
I really don't know how to translate this condition into R.
My solution is clunkier than @James's (now deleted) answer, but maybe (?) it's more flexible:
## identify strings of NAs preceded by 0
library(zoo)
na_following_zero <- na.locf(c(1,x))[-1]==0 & is.na(x)
## now identify the 'chunks' to reset
## (there may be a more elegant way to do this)
rr <- rle(na_following_zero)
startvals <- cumsum(c(0,rr$lengths))+1
endvals <- cumsum(rr$lengths)
values <- c(NA,y[startvals-1])
z <- y
## replace values in chunks
for (i in seq_along(rr$values)[rr$values])
  z[startvals[i]:endvals[i]] <- values[i]
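For reference, running the code above on a finite version of the first example vector reproduces the expected output:

x <- c(NA,NA,NA,1,NA,NA,NA,NA,0,NA,NA,NA,1,NA,NA,0,NA,NA,NA,NA,1,NA,NA,NA,0,NA)
y <- seq_along(x)
## ... run the answer code above ...
z
## [1]  1  2  3  4  5  6  7  8  9  9  9  9 13 14 15 16 16 16 16 16 21 22 23 24 25 25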
If time isn't prohibitive, you can just use a "for" loop:
z <- y
between.0.1 <- rep(FALSE, length(x))
for(i in 2:length(x)){
  if(!is.na(x[i-1]) && x[i-1]==0){ # switch on after a 0
    between.0.1[i] <- TRUE
    value.at.0 <- y[i-1]
    z[i] <- value.at.0
  }
  if(!is.na(x[i]) && x[i]==1){ # switch off at a 1 (checked before the stay-on
    between.0.1[i] <- FALSE    # branch, so the position of the 1 keeps y[i])
    value.at.0 <- NA
  } else if(between.0.1[i-1]){ # if switched on, stay switched on
    between.0.1[i] <- TRUE
    z[i] <- value.at.0
  }
}
z[between.0.1] # diagnostic check
Another approach:
y0 <- which(x==0)
y1 <- which(x==1)
# need a kicker to start with the first zero
y1 <- y1[y1 > y0[1]]
# and check for the end of the sequence
if(y1[length(y1)] < length(x)) y1[length(y1)+1] <- length(x)+1
# now y0 and y1 had better be the same length
z <- y
# now do the loop any way you want
for (jj in 1:length(y0)) z[y0[jj]:(y1[jj]-1)] <- y[y0[jj]]
z
# [1]  1  2  3  4  4  4  4  4  9 10 11 12 13 13 13 16 17 18 19 20 21 21 21 21 25
# [26] 26
I'm looking for a way to extract evenly spaced elements in a vector. I'd like a general way to do this because I am trying to specify the values that I want in a plotly chart. I tried using pretty but that only seems to work with ggplot2.
I'm pretty much looking for an R version of this question that was answered for python.
Here's a sample set. This sample is a vector of 23 elements; 23 is prime, so the vector can't be divided into equal-sized groups.
x <- 1:23
Ideally, there would be a function that takes a number for the spacing (n) and that splits x into a subset of n evenly spaced values that also includes the first and last element. For example:
split_func(x, n = 4)
[1] 1 4 8 12 16 20 23
The output elements are centered between the first and last elements and are spaced by 4, with the exception of the first/second and second-to-last/last pairs.
A couple other examples:
split_func(x, n = 5)
[1] 1 5 10 15 20 23 # either this
[1] 1 4 9 14 19 23 # or this would work
split_func(1:10, n = 3)
[1] 1 3 6 9 10 # either this
[1] 1 2 5 8 10 # or this would work
split_func(1:27, n = 6)
[1] 1 5 11 17 23 27
Is there a function that does this already?
Try this:
split_func <- function(x, by) {
  r <- diff(range(x))
  out <- seq(0, r - by - 1, by = by)
  c(round(min(x) + c(0, out - 0.51 + (max(x) - max(out)) / 2), 0), max(x))
}
split_func(1:23, 4)
# [1] 1 4 8 12 16 20 23
split_func(1:23, 5)
# [1] 1 4 9 14 19 23
split_func(1:10, 3)
# [1] 1 4 7 10
split_func(1:27, 6)
# [1] 1 5 11 17 23 27
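A quick sanity check (a small usage example) that the interior points are spaced by the requested amount:

diff(split_func(1:23, 4))
# [1] 3 4 4 4 4 3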
I wish to locate the date of evenly-spaced events when given the number of events and the number of days in the period of interest. This seems like a trivial objective, but it is confusing me.
Here is a very simple example that has a straightforward solution:
n.trips <- 5
n.days <- 20
mean.trips.per.day <- n.trips / n.days
cummulative.trips <- mean.trips.per.day * c(1:n.days)
cummulative.trips
#  [1] 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00
# [13] 3.25 3.50 3.75 4.00 4.25 4.50 4.75 5.00
# Find the date of each trip
which(cummulative.trips %in% c(1:n.days))
#[1] 4 8 12 16 20
But the following example is not straightforward. Three attempted solutions are shown below, but none matches the desired result. In this example I am trying to pick out the locations of the six elements of the vector cummulative.trips that most closely match the integers 1:6. Those locations are shown in the vector desired.dates:
n.trips <- 6
n.days <- 17
# Here are the desired results
date.of.first.trip <- 3 # 1.0588235
date.of.second.trip <- 6 # 2.1176471
date.of.third.trip <- 8 # or 9: 2.8235294 3.1764706; 8 is the first
date.of.fourth.trip <- 11 # 3.8823529
date.of.fifth.trip <- 14 # 4.9411765
date.of.sixth.trip <- 17 # 6.0000000
desired.dates <- c(3,6,8,11,14,17)
mean.trips.per.day <- n.trips / n.days
cummulative.trips <- mean.trips.per.day * c(1:n.days)
cummulative.trips
#  [1] 0.3529412 0.7058824 1.0588235 1.4117647 1.7647059 2.1176471
#  [7] 2.4705882 2.8235294 3.1764706 3.5294118 3.8823529 4.2352941
# [13] 4.5882353 4.9411765 5.2941176 5.6470588 6.0000000
Here are three possible solutions I attempted:
# Find the date of each trip
which(cummulative.trips %in% c(1:n.days))
#[1] 17
which(round(cummulative.trips) %in% c(1:n.days))
#[1] 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
round(seq(1, n.days, length = n.trips))
#[1] 1 4 7 11 14 17
EDIT
I tried the function below, suggested by @MrFlick in a comment (from "What is the fastest way to check if a number is a positive natural number? (in R)"), but it simply returns a result that essentially matches the first of the three approaches I tried above for my second example.
is.naturalnumber <-
function(x, tol = .Machine$double.eps^0.5) x > tol & abs(x - round(x)) < tol
x <- cummulative.trips
is.naturalnumber(x)
#[1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
Perhaps something like this will work
nearest_index <- function(targets, values) {
  sapply(targets, function(x) which.min(abs(values-x)))
}
nearest_index(1:6, cummulative.trips)
# [1] 3 6 8 11 14 17
For each "target" value, we find the index of the observed value that minimizes the absolute difference from the target.
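For instance, for the first target:

which.min(abs(cummulative.trips - 1))
# [1] 3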
After checking @MrFlick's answer with a number of combinations of n.trips and n.days, I discovered a scenario where his code did not return the answer I expected (n.trips <- 26; n.days <- 13). His code returned (assuming I used it correctly):
[1] 1 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 11 12 12 13
But I was expecting:
[1] 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 11 12 12 13 13
I probably should have explained my problem more clearly in my original post. I ended up writing the following for loop and have tested it with the 10 combinations of n.trips and n.days listed below. So far this loop returns what I expect for all 10 combinations. The code incorporates @MrFlick's approach, albeit in substantially modified form.
mean.trips.per.day <- n.trips / n.days
cummulative.trips.by.day <- mean.trips.per.day * c(1:n.days)
cummulative.trips.by.day
date.of.trip <- rep(0, n.trips)
for(i in 1:n.trips) {
  trip.candidate.days <- which(round(cummulative.trips.by.day) >= i)
  if(length(trip.candidate.days) > 0)
    date.of.trip[i] <- trip.candidate.days[which.min(abs(cummulative.trips.by.day[trip.candidate.days] - i))]
  # no day rounds to >= i, which suggests there were at most i-1 trips
  if(length(trip.candidate.days) == 0) date.of.trip[i] <- 0
}
date.of.trip
Here are the 10 combinations of n.trips and n.days I have used so far to test this code.
n.trips <- 12;  n.days <- 12
n.trips <- 6;   n.days <- 12
n.trips <- 5;   n.days <- 13
n.trips <- 26;  n.days <- 13
n.trips <- 28;  n.days <- 13
n.trips <- 20;  n.days <- 13
n.trips <- 0;   n.days <- 13
n.trips <- 1;   n.days <- 13
n.trips <- 2;   n.days <- 13
n.trips <- 100; n.days <- 23
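A small harness to re-run those checks in one go (my own sketch; trip.dates is a hypothetical wrapper around the loop above):

trip.dates <- function(n.trips, n.days) {
  cum.trips <- (n.trips / n.days) * seq_len(n.days)
  date.of.trip <- rep(0, n.trips)   # stays 0 when no day qualifies
  for (i in seq_len(n.trips)) {
    cand <- which(round(cum.trips) >= i)
    if (length(cand) > 0)
      date.of.trip[i] <- cand[which.min(abs(cum.trips[cand] - i))]
  }
  date.of.trip
}
combos <- list(c(12,12), c(6,12), c(5,13), c(26,13), c(28,13),
               c(20,13), c(0,13), c(1,13), c(2,13), c(100,23))
lapply(combos, function(p) trip.dates(p[1], p[2]))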
I need to create 10 bins with approximately equal frequency in each; for this I am using the function classIntervals from the classInt package with the style 'quantile' to bin some data. This works for most columns, but when a column has one number repeated many times, an error appears saying that some breaks are not unique. That makes sense: when the last 30%+ of the column's data is the same number, the function doesn't know how to split the bins.
What I would like to do is: if a number occurs more often than 10% of the length of the column, treat it as its own bin, and otherwise use the function as it is.
For example, let's assume we have this DF:
df <- read.table(text="
X
1 5
2 29
3 4
4 26
5 4
6 17
7 4
8 4
9 4
10 25
11 4
12 4
13 5
14 14
15 18
16 13
17 29
18 4
19 13
20 6
21 26
22 11
23 2
24 23
25 4
26 21
27 7
28 4
29 18
30 4",h=T,strin=F)
So in this case 10% of the length would be 3, and a table containing the frequency of each number looks something like this:
value freq
    2    1
    4   11
    5    2
    6    1
    7    1
   11    1
   13    2
   14    1
   17    1
   18    2
   21    1
   23    1
   25    1
   26    2
   29    2
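For reference, the threshold and the table above can be computed directly (a small sketch):

TenPer <- floor(nrow(df) / 10)     # 3
as.data.frame(table(value = df$X)) # frequency of each number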
With this info, first we should treat 4 as its own bin.
So we have a final output more or less like this:
X Bins
1 5 [2,6)
2 29 [27,30)
3 4 [4]
4 26 [26,27)
5 4 [4]
6 17 [15,19)
7 4 [4]
8 4 [4]
9 4 [4]
10 25 [19,26)
11 4 [4]
12 4 [4]
13 5 [2,6)
14 14 [12,15)
15 18 [15,19)
16 13 [12,15)
17 29 [27,30)
18 4 [4]
19 13 [12,15)
20 6 [6,12)
21 26 [26,27)
22 11 [6,12)
23 2 [2,6)
24 23 [19,26)
25 4 [4]
26 21 [19,26)
27 7 [6,12)
28 4 [4]
29 18 [15,19)
30 4 [4]
Until now, my approach has been something like this:
Moda <- function(x) {
  ux <- unique(x)
  ux[which.max(tabulate(match(x, ux)))]
}
Binner <- function(df) {
  library(classInt)
  # input is a data frame whose numeric columns are to be binned
  for (c in 1:ncol(df)) {
    if (sapply(df, class)[c] == "numeric") {
      VectorTest <- df[, c]
      # here I get 10% of the number of values
      TenPer <- floor(length(VectorTest) / 10)
      Counter <- 0
      VectorTest_Fixed <- VectorTest
      while (sum(VectorTest_Fixed == Moda(VectorTest_Fixed)) >= TenPer) {
        # in this loop I manage to remove the values that are repeated more
        # than 10% of the time, but I still don't know how to add them back
        # as a special bin
        VectorTest_Fixed <- VectorTest_Fixed[VectorTest_Fixed != Moda(VectorTest_Fixed)]
        Counter <- Counter + 1
      }
      binsTest <- classIntervals(VectorTest_Fixed, 10 - Counter, style = 'quantile')
      binsBrakets <- cut(VectorTest, breaks = binsTest$brks)
      df[, paste0("Binned_", colnames(df)[c])] <- binsBrakets
    }
  }
  return(df)
}
Can someone help me?
You could use cutr::smart_cut:
# devtools::install_github("moodymudskipper/cutr")
library(cutr)
df$Bins <- smart_cut(df$X,list(10,"balanced"),"g",simplify = F)
table(df$Bins)
#
# [2,4) [4,5) [5,6) [6,11) [11,14) [14,18) [18,21) [21,25) [25,29) [29,29]
# 1 11 2 2 3 2 2 2 3 2
more on cutr and smart_cut
You can create two different data frames: one with the 10% bins and one with the cut-created bins for the rest, then bind them together (making sure the bins are strings).
library(magrittr)
#lets find the numbers that appear more than 10% of the time
large <- table(df$X) %>%
.[. >= length(df$X)/10] %>%
names()
#these numbers appear less than 10% of the time
left_over <- df$X[!df$X %in% large]
#we want a total of 10 bins, so we'll cut the data into 10 - the number of 10%
left_over_bins <- cut(left_over, 10 - length(large))
#Let's combine the information into a single data frame
numbers_bins <- rbind(
data.frame(
n = left_over,
bins = left_over_bins %>% as.character,
stringsAsFactors = F
),
data.frame(
n = df$X[df$X %in% large],
bins = df$X[df$X %in% large] %>% as.character,
stringsAsFactors = F
)
)
If you table the information you'll get something like this
table(numbers_bins$bins) %>% sort(T)
4 (1.97,5] (11,14] (23,26] (17,20]
11 3 3 3 2
(20,23] (26,29] (5,8] (14,17] (8,11]
2 2 2 1 1
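To attach these bins back onto the original data frame in the original row order, one option is a match() lookup (a sketch; it works here because each value of X maps to exactly one bin):

df$Bins <- numbers_bins$bins[match(df$X, numbers_bins$n)]
head(df)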
I know how to take the lagged difference:
delX = diff(x)
But the only way I know to take the lagged sum is:
sumX = apply(embed(c(0, x), 2), 1, sum)
Is there a function that can take the lagged sum? This way (or sliding the index in some other fashion) is not very intuitive.
You're looking for filter (that is, stats::filter; note that dplyr masks it if dplyr is loaded):
x <- 1:10
filter(x, filter=c(1,1), sides=1)
# [1] NA 3 5 7 9 11 13 15 17 19
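filter also generalizes to longer windows; for example, a three-term lagged sum (the current value plus two lags):

stats::filter(x, filter = c(1, 1, 1), sides = 1)
# [1] NA NA  6  9 12 15 18 21 24 27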
You could also use head and tail (note the result is one element shorter, without the leading NA):
head(x, -1) + tail(x, -1)
# [1] 3 5 7 9 11 13 15 17 19
Two more options:
x <- 1:10
x + dplyr::lag(x)
# [1] NA 3 5 7 9 11 13 15 17 19
x + data.table::shift(x)
# [1] NA 3 5 7 9 11 13 15 17 19
Note that you can easily change the number of lags in both functions. Instead of lagging, you can also create a leading vector by using dplyr::lead() or data.table::shift(x, 1L, type = "lead"). Both functions also allow you to specify default values (which are NA by default).
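For instance, a lag of two with a default of 0 instead of NA (assuming dplyr and data.table are installed):

x + dplyr::lag(x, n = 2, default = 0)
# [1]  1  2  4  6  8 10 12 14 16 18
x + data.table::shift(x, n = 2, fill = 0)
# [1]  1  2  4  6  8 10 12 14 16 18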
With the help of sebastian-c, I figured out my problem with daily data. Please see: R ifelse condition: frequency of continuously NA
And now I have a data set with hourly data:
set.seed(1234)
day <- c(rep(1:2, each=24))
hr <- c(rep(0:23, 2))
v <- c(rep(NA, 48))
A <- data.frame(cbind(day, hr, v))
A$v <- sample(c(NA, rnorm(100)), nrow(A), prob=c(0.5, rep(0.5/100, 100)), replace=TRUE)
What I need to do is: if there are >= 4 continuously missing day-hours (7AM-7PM) or >= 3 continuously missing night-hours (7PM-7AM), delete the entire day from the data frame; otherwise just run linear interpolation. Thus the second day should be entirely deleted from the data frame, since there are 4 continuous NAs during the daytime (7AM-10AM). The result should preferably remain a data frame. Please help, thank you!
If I modify the NA_run function from the question you linked to take a variable named v instead of value and return the boolean rather than the data.frame:
NA_run <- function(x, maxlen){
  runs <- rle(is.na(x$v))
  any(runs$lengths[runs$values] >= maxlen)
}
I can then write a wrapper function to call it twice for daytime and nighttime:
dropfun <- function(x) {
  dt <- x$hr > 7 & x$hr < 19
  daytime <- NA_run(x[dt,], 4)
  nighttime <- NA_run(x[!dt,], 3)
  any(daytime, nighttime)
}
Applying it per day with ddply (from plyr) gives a data.frame flagging the days to drop:
> ddply(A, .(day), dropfun)
day V1
1 1 TRUE
2 2 FALSE
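To pull out just the day numbers that should be dropped (a small usage sketch; ddply is from plyr):

library(plyr)
flags <- ddply(A, .(day), dropfun)
flags$day[flags$V1]
# [1] 1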
We can alter the dropfun to return the dataframe instead though:
dropfun <- function(x) {
  dt <- x$hr > 7 & x$hr < 19
  daytime <- NA_run(x[dt,], 4)
  nighttime <- NA_run(x[!dt,], 3)
  if(any(daytime, nighttime)) NULL else x
}
> ddply(A, .(day), dropfun)
day hr v
1 2 0 NA
2 2 1 NA
3 2 2 2.54899107
4 2 3 NA
5 2 4 -0.03476039
6 2 5 NA
7 2 6 0.65658846
8 2 7 0.95949406
9 2 8 NA
10 2 9 1.08444118
11 2 10 0.95949406
12 2 11 NA
13 2 12 -1.80603126
14 2 13 NA
15 2 14 NA
16 2 15 0.97291675
17 2 16 NA
18 2 17 NA
19 2 18 NA
20 2 19 -0.29429386
21 2 20 0.87820363
22 2 21 NA
23 2 22 0.56305582
24 2 23 -0.11028549
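The question also asks for linear interpolation on the days that are kept. A minimal sketch using zoo::na.approx, assuming interpolation within the remaining rows is acceptable and leading/trailing NAs may stay as they are:

library(zoo)
B <- ddply(A, .(day), dropfun)        # keep only the acceptable days
B$v <- na.approx(B$v, na.rm = FALSE)  # linear interpolation between observed values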