Find 3 consecutive highest values in circular series in R

I have data in the following form (2 examples):
p1 <- structure(c(1.38172177074188, 1.18601365390563, 1.25131938561825,
1.07175353794277, 0.887770295772917, 0.806599968169486, 0.843543355495394,
0.889051695167723, 0.764131945540256, 0.699309441111923, 0.945165791967098,
1.31310409471336), .Dim = 12L)
p2 <- structure(c(1.24801075135611, 1.06280347993594, 1.21410288703334,
1.36797720634294, 1.07291218307332, 0.936924063490867, 0.819723966406961,
0.854960740335283, 0.718565087630857, 0.649827141012991, 0.785853771875901,
1.04368795443605), .Dim = 12L)
These are standardized monthly means of hydrological time series; so-called Pardé regimes that give some indication of annual seasonality. For further analysis, I need to derive the 3 highest and lowest months from these Pardé series. Because seasonality can be bimodal, I need to identify the 3 highest/lowest consecutive data points (which are most often not the three absolute highest/lowest data points, see examples) to derive the timing of the wettest and driest periods. Up to now I have failed because of the circular character of the time series, which poses a special challenge.
Any suggestions?

You could use stats::filter. With a window of ones it sums consecutive values, and it can deal with circular time series.
f1 <- stats::filter(p1, c(1, 1, 1), circular = TRUE, sides = 1)
#Time Series:
# Start = 1
#End = 12
#Frequency = 1
#[1] 3.639992 3.880840 3.819055 3.509087 3.210843 2.766124 2.537914 2.539195 2.496727 2.352493 2.408607 2.957579
((which.max(f1) - (3:1)) %% 12) + 1
#[1] 12 1 2
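
The same filtered sums also give the lowest consecutive window: just swap which.max for which.min. A quick sketch, reusing f1 from above:

```r
# driest 3-month window: same rolling sums, minimum instead of maximum
((which.min(f1) - (3:1)) %% 12) + 1
#[1] 8 9 10
```

For p1 the minimum rolling sum is 2.352493 at position 10, so months 8, 9 and 10 form the driest consecutive period.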


Why is my interpolation function skipping days with NA values?

I have 19 weather stations with approximately 15 years of daily rainfall data. I want to interpolate the rainfall for a certain location. I have written a function for the interpolation as below:
Then, I have written a for loop to make the interpolation for each day of the time series. However, my stations' data are not consistent: they include NA values. For example, station 1 covers [2000-2014], but station 2 [2007-2009], station 3 [2004-2012], etc.
For this reason I want to add an if statement: if the number of available stations is more than 3, interpolate the data.
My codes are given below.
The problem is that my code only interpolates the data for the period when all stations have data (the period without NA).
How can I fix this?
Thanks
# read the data: a matrix with 19 cols and 6500 rows
# example:
prec <- matrix(rnorm(123500), nrow = 6500, ncol = 19)
# statlist is the data frame including station coordinates
# Distt is the function calculating the distances of the stations to my central point
Distt <- function(statlist) {
pointx =661967
pointy =277271
coordx=statlist$X
coordy=statlist$Y
dist = sqrt((coordx-pointx)^2+(coordy-pointy)^2)
distance <<- dist
}
# the loop calculates the distances for all stations
for (i in 1:19) {
Distt(statlist)
}
# interpolation function
idwfunc <- function(prec,distance) {
weight = 1/distance
interp = sum(prec*weight)/sum(weight)
interp <<- interp
}
# calculate how many stations have data on each day
val_per_day <- apply(prec, 1, function (x) sum(!is.na(x)))
# this is the number-of-stations plot
# count how many days have more than 3 measurements
ii <- which(val_per_day > 3)[2]
# output
output <- as.data.frame(matrix(NA, nrow=nrow(prec), ncol=1, dimnames=list(1:nrow(prec),c("interpolated_prec")) ) )
# this is the for loop to interpolate the data for each day
for (iii in seq(1, length(val_per_day), by=1)){
if (as.numeric(val_per_day[iii])>2) {
output[iii,1]<-idwfunc(prec[iii,],distance)
}else{
output[iii,1]<- NA
}
}
this for loop interpolates the data only for the period without any NA values. However, I need a loop that interpolates whenever 3 or more stations have data on a given day.
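
One way to handle this could be to drop the NA stations inside the interpolation function itself, so each day uses only the stations that report. A sketch (the name idw_na and the min_stations argument are my own, not from the question):

```r
# inverse-distance interpolation that ignores NA stations (sketch)
idw_na <- function(prec_day, distance, min_stations = 3) {
  ok <- !is.na(prec_day)                  # stations reporting today
  if (sum(ok) < min_stations) return(NA)  # too few stations: no estimate
  w <- 1 / distance[ok]                   # inverse-distance weights
  sum(prec_day[ok] * w) / sum(w)
}

# one value per day, replacing the explicit for loop:
# output$interpolated_prec <- apply(prec, 1, idw_na, distance = distance)
```

Because the NA rows are filtered out before the weighted sum, days with partial coverage still get an estimate instead of propagating NA.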

Use R to Efficiently Order Randomly Generated Transects

Problem
I am looking for a way to efficiently order randomly selected sampling transects that occur around a fixed object. These transects, once generated, need to be ordered in a way that makes sense spatially, such that the distance traveled is minimized. This would be done by ensuring that the end point of the current transect is as close as possible to the start point of the next transect. Also, none of the transects can be repeated.
Because there are thousands of transects to order and this is a very tedious task to do manually, I am trying to use R to automate this process. I have already generated the transects, each having a start and end point whose location is indicated using a 360-degree system (e.g., 0 is North, 90 is East, 180 is South, and 270 is West). I have also generated some code that seems to indicate the start point and the ID for the next transect, but there are a few problems with this code: (1) it can generate errors depending on the start and end points being considered, (2) it doesn't achieve what I ultimately need it to achieve, and (3) as is, the code itself seems overly complicated and I can't help but wonder if there is a more straightforward way to do this.
Ideally, the code would result in the transects being reordered such that they match the order in which they should be flown rather than the order in which they were initially input.
The Data
For simplicity, let's pretend there are just 10 transects to order.
# Transect ID for the start point
StID <- c(seq(1, 10, 1))
# Location of transect start point, based on a 360-degree circle
StPt <- c(342.1, 189.3, 116.5, 67.9, 72, 208.4, 173.2, 97.8, 168.7, 138.2)
# Transect ID for the end point
EndID <- c(seq(1, 10, 1))
# Location of transect end point, based on a 360-degree circle
EndPt <- c(122.3, 313.9, 198.7, 160.4, 166, 26.7, 312.7, 273.7, 288.8, 287.5)
# Dataframe
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
What I Have Tried
Please feel free to ignore this code; there has to be a better way, and it does not really achieve the intended outcome. Right now I am using a nested for loop that is very difficult to follow intuitively but represents my best attempt thus far.
# Create two new columns that will be populated using a loop
df$StPt_Next <- NA
df$ID_Next <- NA
# Also create a list to be populated as end and start points are matched
used <- c(df$StPt[1]) #puts the start point of transect #1 into the used vector since we will start with 1 and do not want to have it used again
# Then, for every row in the dataframe...
for (i in seq(1,length(df$EndPt)-1, 1)){ # Selects all rows except the last one as the last transect should have no "next" transect
# generate some print statements to indicate that the script is indeed running while you wait....
print(paste("######## ENDPOINT", i, ":", df$EndPt[i], " ########"))
print(paste("searching for a start point that fits criteria to follow this endpoint",sep=""))
# sequentially select each end point
valueEndPt <- df[i,1]
# and order the index by taking the absolute difference of end and start points and, if this value is greater than 180, also subtract from 360 so all differences are less than 180, then order differences from smallest to largest
orderx <- order(ifelse(360-abs(df$StPt-valueEndPt) > 180,
abs(df$StPt-valueEndPt),
360-abs(df$StPt-valueEndPt)))
tmp <- as.data.frame(orderx)
# specify index value
index=1
# for as long as there is an "NA" present in the StPt_Next created before for loop...
while (is.na(df$StPt_Next[i])) {
#select the value of the ordered index in sequential order
j=orderx[index]
# if the start point associated with a given index is present in the list of used values...
if (df$StPt[j] %in% used){
# then have R print a statement indicate this is the case...
print(paste("passing ",df$StPt[j], " as it has already been used",sep=""))
# and move onto the next index
index=index+1
# break statement intended to skip the remainder of the code for values that have already been used
next
# if the start point associated with a given index is not present in the list of used values...
} else {
# then identify the start point value associated with that index ID...
valueStPt <- df$StPt[j]
# and have R print a statement indicating an attempt is being made to use the next value
print(paste("trying ",df$StPt[j],sep=""))
# if the end transect number is different from the start end transect number...
if (df$EndID[i] != df$StID[j]) {
# then put the start point in the new column...
df$StPt_Next[i] <- df$StPt[j]
# note which record this start point came from for ease of reference/troubleshooting...
df$ID_Next[i] <- j
# have R print a statement that indicates a value for the new column has been selected...
print(paste("using ",df$StPt[j],sep=""))
# and add that start point to the list of used ones
used <- c(used,df$StPt[j])
# otherwise, if the end transect number matches the start end transect number...
} else {
# keep NA in this column and try again
df$StPt_Next[i] <- NA
# and indicate that this particular matched pair can not be used
print(paste("cant use ",valueStPt," as the column EndID (related to index in EndPt) and StID (related to index in StPt) values are matching",sep=""))
}# end if else statement to ensure that start and end points come from different transects
# and move onto the next index
index=index+1
}# end if else statement to determine if a given start point still needs to be used
}# end while loop to identify if there are still NA's in the new column
}# end for loop
The Output
When the code does not produce an explicit error, such as for the example data provided, the output is as follows:
StPt StID EndPt EndID StPt_Next ID_Next
1 342.1 1 122.3 1 67.9 4
2 189.3 2 313.9 2 173.2 7
3 116.5 3 198.7 3 97.8 8
4 67.9 4 160.4 4 72.0 5
5 72.0 5 166.0 5 116.5 3
6 208.4 6 26.7 6 189.3 2
7 173.2 7 312.7 7 168.7 9
8 97.8 8 273.7 8 138.2 10
9 168.7 9 288.8 9 208.4 6
10 138.2 10 287.5 10 NA NA
The last two columns were generated by the code and added to the original dataframe. StPt_Next has the location of the next closest start point and ID_Next indicates the transect ID associated with that next start point location. The ID_Next column indicates that the order in which transects should be flown is 1, 4, 5, 3, 8, 10, NA (i.e., the end), while 2, 7, 9, 6 form their own loop that goes back to 2.
There are two specific problems that I can't solve:
(1) There is the problem of forming one continuous chain of transects. I think this is related to 1 always being the starting transect and 10 always being the last, but I don't know how to indicate in the code that the second-to-last transect must match up with 10, so that the sequence includes all 10 transects before terminating at an "NA" representing the final end point.
(2) To really automate this process, after fixing the early termination of the sequence caused by the premature introduction of the "NA" as the ID_Next, a new column would be needed that allows the transects to be reordered based on the most efficient progression rather than the original order of their EndID/StID.
Intended Outcome
If we pretend that we only had 6 transects to order and ignore the 4 that were not able to be ordered due to the premature introduction of the "NA", this would be the intended outcome:
StPt StID EndPt EndID StPt_Next ID_Next TransNum
1 342.1 1 122.3 1 67.9 4 1
4 67.9 4 160.4 4 72.0 5 2
5 72.0 5 166.0 5 116.5 3 3
3 116.5 3 198.7 3 97.8 8 4
8 97.8 8 273.7 8 138.2 10 5
10 138.2 10 287.5 10 NA NA 6
EDIT: A Note About the Error Message Explicitly Produced by the Code
As indicated earlier, the code has a few flaws. Another flaw is that it will often produce an error when trying to order a larger number of transects. I am not entirely sure at what step in the process the error is generated, but I am guessing that it is related to the inability to match up the last few transects, possibly due to not meeting the criteria set forth by "orderx". The print statements say "trying NA" instead of a start point in the database, which results in this error: "Error in if (df$EndID[i] != df$StID[j]) { : missing value where TRUE/FALSE needed". I am guessing that there would need to be another if-else statement that somehow indicates "if the remaining points do not meet the orderx criteria, then just force them to match up with whatever transect remains so that everything is assigned a StPt_Next and ID_Next".
Here is a larger dataset that will generate the error:
EndPt <- c(158.7,245.1,187.1,298.2,346.8,317.2,74.5,274.2,153.4,246.7,193.6,302.3,6.8,359.1,235.4,134.5,111.2,240.5,359.2,121.3,224.5,212.6,155.1,353.1,181.7,334,249.3,43.9,38.5,75.7,344.3,45.1,285.7,155.5,183.8,60.6,301,132.1,75.9,112,342.1,302.1,288.1,47.4,331.3,3.4,185.3,62,323.7,188,313.1,171.6,187.6,291.4,19.2,210.3,93.3,24.8,83.1,193.8,112.7,204.3,223.3,210.7,201.2,41.3,79.7,175.4,260.7,279.5,82.4,200.2,254.2,228.9,1.4,299.9,102.7,123.7,172.9,23.2,207.3,320.1,344.6,39.9,223.8,106.6,156.6,45.7,236.3,98.1,337.2,296.1,194,307.1,86.6,65.5,86.6,296.4,94.7,279.9)
StPt <- c(56.3,158.1,82.4,185.5,243.9,195.6,335,167,39.4,151.7,99.8,177.2,246.8,266.1,118.2,358.6,357.9,99.6,209.9,342.8,106.5,86.4,35.7,200.6,65.6,212.5,159.1,297,285.9,300.9,177,245.2,153.1,8.1,76.5,322.4,190.8,35.2,342.6,8.8,244.6,202,176.2,308.3,184.2,267.2,26.6,293.8,167.3,30.5,176,74.3,96.9,186.7,288.2,62.6,331.4,254.7,324.1,73.4,16.4,64,110.9,74.4,69.8,298.8,336.6,58.8,170.1,173.2,330.8,92.6,129.2,124.7,262.3,140.4,321.2,34,79.5,263,66.4,172.8,205.5,288,98.5,335.2,38.7,289.7,112.7,350.7,243.2,185.4,63.9,170.3,326.3,322.9,320.6,199.2,287.1,158.1)
EndID <- c(seq(1, 100, 1))
StID <- c(seq(1, 100, 1))
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
Any advice would be greatly appreciated!
As #chinsoon12 points out, hidden in your problem you have an (asymmetric) Traveling Salesman Problem. The asymmetry arises because the start and end points of your transects are different.
ATSP is a renowned NP-complete problem, so exact solutions are very difficult even for medium-sized problems (see Wikipedia for more info). Hence the best we can do in most cases is an approximation or heuristic. As you mention there are thousands of transects, this is at least a medium-sized problem.
Rather than code an ATSP approximation algorithm from the start, there is an existing TSP library for R. This includes several approximation algorithms. Reference documentation is here.
The following is my use of the TSP package applied to your problem, beginning with the setup (assume StPt, StID, EndPt, and EndID have been run as in your question).
install.packages("TSP")
library(TSP)
library(dplyr)
# Dataframe
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
# filter to 6 example nodes for requested comparison
df = df %>% filter(StID %in% c(1,3,4,5,8,10))
We shall use ATSP from a distance matrix. Position [row,col] in the matrix is the cost/distance of going from (the end of) transect row to (the start of) transect col. This code creates the entire distance matrix.
# distance calculation
transec_distance = function(end,start){
abs_dist = abs(start-end)
ifelse(360-abs_dist > 180, abs_dist, 360-abs_dist)
}
# distance matrix
matrix_distance = matrix(data = NA, nrow = nrow(df), ncol = nrow(df))
for(start_id in 1:nrow(df)){
start_point = df[start_id,'StPt']
for(end_id in 1:nrow(df)){
end_point = df[end_id,'EndPt']
matrix_distance[end_id,start_id] = transec_distance(end_point, start_point)
}
}
Note that there are more effective ways to construct a distance matrix. However, I have chosen this approach for its clarity. Depending on your computer and the exact number of transects this code may run very slowly.
Also, note that the size of this matrix is quadratic in the number of transects, so for a large number of transects you may discover there is not enough memory.
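
As a side note, since transec_distance is already vectorised (it only uses abs and ifelse), the double loop above could probably be replaced by a single outer call. A sketch, not part of the answer proper:

```r
# same matrix as the double loop, built in one call:
# entry [end_id, start_id] = transec_distance(EndPt[end_id], StPt[start_id])
matrix_distance2 <- outer(df$EndPt, df$StPt, FUN = transec_distance)
```

This avoids the per-element indexing and should scale better to thousands of transects, though memory use is still quadratic.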
The solving is very unexciting. The distance matrix gets turned into a ATSP object, and the ATSP object gets passed to the solver. We then proceed to add the ordering/traveling information to the original df.
answer = solve_TSP(as.ATSP(matrix_distance))
# get length of cycle
print(answer)
# sort df to same order as solution
df_w_answer = df[as.numeric(answer),]
# add info about next transect to each transect
df_w_answer = df_w_answer %>%
mutate(visit_order = 1:nrow(df_w_answer)) %>%
mutate(next_StID = lead(StID, order_by = visit_order),
next_StPt = lead(StPt, order_by = visit_order))
# add info about next transect to each transect (for final transect)
df_w_answer[df_w_answer$visit_order == nrow(df_w_answer),'next_StID'] =
df_w_answer[df_w_answer$visit_order == 1,'StID']
df_w_answer[df_w_answer$visit_order == nrow(df_w_answer),'next_StPt'] =
df_w_answer[df_w_answer$visit_order == 1,'StPt']
# compute distance between end of each transect and start of next
df_w_answer = df_w_answer %>% mutate(dist_between = transec_distance(EndPt, next_StPt))
At this point we have a cycle. You can pick any node as the starting point, follow the order given in the df: from EndID to next_StID, and you will cover every transect in (a good approximation to) the minimum distance.
However in your 'intended outcome' you have a path solution (e.g. start at transect 1 and finish at transect 10). We can turn the cycle into a path by excluding the single most expensive transition:
# as path (without returning to start)
min_distance = sum(df_w_answer$dist_between) - max(df_w_answer$dist_between)
path_start = df_w_answer[df_w_answer$dist_between == max(df_w_answer$dist_between), 'next_StID']
path_end = df_w_answer[df_w_answer$dist_between == max(df_w_answer$dist_between), 'EndID']
print(sprintf("minimum cost path = %.2f, starting at node %d, ending at node %d",
min_distance, path_start, path_end))
Running all the above gives me a different, but superior, answer to your intended outcome. I get the following order: 1 --> 5 --> 8 --> 4 --> 3 --> 10 --> 1.
Your path from transect 1 to transect 10 has a total distance of 428; if we also returned from transect 10 to transect 1, making this a cycle, the total distance would be 483.
Using the TSP package in R we get a path from 1 to 10 with total distance 377, and as a cycle 431.
If we instead start at node 4 and end at node 8, we get a total distance of 277.
Some additional notes:
Not all TSP solvers are deterministic, hence you may get some variation in your answer if you run again, or run with the input rows in a different order.
TSP is a much more general problem than the transect problem you described. It is possible that your problem has enough additional/special features that it can be solved exactly in a reasonable length of time, but that moves your problem into the realm of mathematics.
If you are running out of memory to create the distance matrix, take a look at the documentation for the TSP package. It contains several examples that use geo-coordinates as inputs rather than a distance matrix. This is a much smaller input size (presumably the package calculates the distances on the fly) so if you convert the start and end points to coordinates and specify euclidean (or some other common distance function) you could get around (some) computer memory limits.
Another version of using the TSP package...
Here is the setup.
library(TSP)
planeDim = 15
nTransects = 26
# generate some random transect beginning points in a plane, the size of which
# is defined by planeDim
b = cbind(runif(nTransects)*planeDim, runif(nTransects)*planeDim)
# generate some random transect ending points that are a distance of 1 from each
# beginning point
e = t(
apply(
b,
1,
function(x) {
bearing = runif(1)*2*pi
x + c(cos(bearing), sin(bearing))
}
)
)
For fun, we can visualize the transects:
# make an empty graph space
plot(1,1, xlim = c(-1, planeDim + 1), ylim = c(-1, planeDim + 1), ty = "n")
# plot the beginning of each transect as a green point, the end as a red point,
# with a thick grey line representing the transect
for(i in 1:nrow(e)) {
xs = c(b[i,1], e[i,1])
ys = c(b[i,2], e[i,2])
lines(xs, ys, col = "light grey", lwd = 4)
points(xs, ys, col = c("green", "red"), pch = 20, cex = 1.5)
text(mean(xs), mean(ys), letters[i])
}
So given a matrix of x,y pairs ("b") for beginning points and a matrix of x,y
pairs ("e") for end points of each transect, the solution is...
# a function that calculates the distance from all endpoints in the ePts matrix
# to the single beginning point in bPt
dist = function(ePts, bPt) {
# apply pythagorean theorem to calculate the distance between every end point
# in the matrix ePts to the point bPt
apply(ePts, 1, function(p) sum((p - bPt)^2)^0.5)
}
# apply the "dist" function to all beginning points to create the distance
# matrix. Since the distance from the end of transect "foo" to the beginning of
# "bar" is not the same as from the end of "bar" to the beginning of "foo," we
# have an asymmetric traveling salesperson problem. Therefore, the distance
# matrix is directional: the value at any position must be the distance from
# the end of the transect in the row label to the beginning of the transect in
# the column label.
distMatrix = apply(b, 1, FUN = dist, ePts = e)
# for convenience, we'll label the transects a to z
dimnames(distMatrix) = list(letters, letters)
# set the distance between the beginning and end of each transect to zero so
# that there is no "cost" to walking the transect
diag(distMatrix) = 0
Here is the upper left corner of the distance matrix:
> distMatrix[1:6, 1:6]
a b c d e f
a 0.00000 15.4287270 12.637979 12.269356 15.666710 12.3919715
b 13.58821 0.0000000 5.356411 13.840444 1.238677 12.6512352
c 12.48161 6.3086852 0.000000 8.427033 6.382304 7.1387840
d 10.69748 13.5936114 7.708183 0.000000 13.718517 0.9836146
e 14.00920 0.7736654 5.980220 14.470826 0.000000 13.2809601
f 12.24503 12.8987043 6.984763 2.182829 12.993283 0.0000000
Now three lines of code from the TSP package solves the problem.
atsp = as.ATSP(distMatrix)
tour = solve_TSP(atsp)
# assume we want to start our circuit at transect "a".
path = cut_tour(tour, "a", exclude_cut = F)
The variable path shows the order in which you should visit the transects:
> path
a w x q i o l d f s h y g v t k c m e b p u z j r n
1 23 24 17 9 15 12 4 6 19 8 25 7 22 20 11 3 13 5 2 16 21 26 10 18 14
We can add the path to the visualization:
for(i in 1:(length(path)-1)) {
lines(c(e[path[i],1], b[path[i+1],1]), c(e[path[i],2], b[path[i+1], 2]), lty = "dotted")
}
Thanks everyone for the suggestions; #Simon's solution was most tailored to my OP. #Geoffrey's approach of using x,y coordinates was great, as it allows for plotting the transects and the travel order. Thus, I am posting a hybrid solution generated from both of their answers, with additional comments and code to detail the process and reach the end result I was aiming for. I am not sure if this will help anyone in the future, but since no single answer solved my problem 100% of the way, I thought I'd share what I came up with.
As others have noted, this is a type of traveling salesperson problem. It is asymmetric because the distance from the end of transect "t" to the beginning of transect "t+1" is not the same as the distance from the end of transect "t+1" to the start of transect "t". Also, it is a "path" solution rather than a "cycle" solution.
#=========================================
# Packages
#=========================================
library(TSP)
library(useful)
library(dplyr)
#=========================================
# Full dataset for testing
#=========================================
EndPt <- c(158.7,245.1,187.1,298.2,346.8,317.2,74.5,274.2,153.4,246.7,193.6,302.3,6.8,359.1,235.4,134.5,111.2,240.5,359.2,121.3,224.5,212.6,155.1,353.1,181.7,334,249.3,43.9,38.5,75.7,344.3,45.1,285.7,155.5,183.8,60.6,301,132.1,75.9,112,342.1,302.1,288.1,47.4,331.3,3.4,185.3,62,323.7,188,313.1,171.6,187.6,291.4,19.2,210.3,93.3,24.8,83.1,193.8,112.7,204.3,223.3,210.7,201.2,41.3,79.7,175.4,260.7,279.5,82.4,200.2,254.2,228.9,1.4,299.9,102.7,123.7,172.9,23.2,207.3,320.1,344.6,39.9,223.8,106.6,156.6,45.7,236.3,98.1,337.2,296.1,194,307.1,86.6,65.5,86.6,296.4,94.7,279.9)
StPt <- c(56.3,158.1,82.4,185.5,243.9,195.6,335,167,39.4,151.7,99.8,177.2,246.8,266.1,118.2,358.6,357.9,99.6,209.9,342.8,106.5,86.4,35.7,200.6,65.6,212.5,159.1,297,285.9,300.9,177,245.2,153.1,8.1,76.5,322.4,190.8,35.2,342.6,8.8,244.6,202,176.2,308.3,184.2,267.2,26.6,293.8,167.3,30.5,176,74.3,96.9,186.7,288.2,62.6,331.4,254.7,324.1,73.4,16.4,64,110.9,74.4,69.8,298.8,336.6,58.8,170.1,173.2,330.8,92.6,129.2,124.7,262.3,140.4,321.2,34,79.5,263,66.4,172.8,205.5,288,98.5,335.2,38.7,289.7,112.7,350.7,243.2,185.4,63.9,170.3,326.3,322.9,320.6,199.2,287.1,158.1)
EndID <- c(seq(1, 100, 1))
StID <- c(seq(1, 100, 1))
df <- cbind.data.frame(StPt, StID, EndPt, EndID)
#=========================================
# Convert polar coordinates to cartesian x,y data
#=========================================
# Area that the transect occupy in space only used for graphing
planeDim <- 1
# Number of transects
nTransects <- 100
# Convert 360-degree polar coordinates to x,y cartesian coordinates to facilitate calculating a distance matrix based on the Pythagorean theorem
EndX <- as.matrix(pol2cart(planeDim, EndPt, degrees = TRUE)["x"])
EndY <- as.matrix(pol2cart(planeDim, EndPt, degrees = TRUE)["y"])
StX <- as.matrix(pol2cart(planeDim, StPt, degrees = TRUE)["x"])
StY <- as.matrix(pol2cart(planeDim, StPt, degrees = TRUE)["y"])
# Matrix of x,y pairs for the beginning ("b") and end ("e") points of each transect
b <- cbind(c(StX), c(StY))
e <- cbind(c(EndX), c(EndY))
#=========================================
# Function to calculate the distance from all endpoints in the ePts matrix to a single beginning point in bPt
#=========================================
dist <- function(ePts, bPt) {
# Use the Pythagorean theorem to calculate the hypotenuse (i.e., distance) between every end point in the matrix ePts to the point bPt
apply(ePts, 1, function(p) sum((p - bPt)^2)^0.5)
}
#=========================================
# Distance matrix
#=========================================
# Apply the "dist" function to all beginning points to create a matrix that has the distance between every start and endpoint
## Note: because this is an asymmetric traveling salesperson problem, the distance matrix is directional, thus, the distances at any position in the matrix must be the distance from the transect shown in the row label and to the transect shown in the column label
distMatrix <- apply(b, 1, FUN = dist, ePts = e)
## Set the distance between the beginning and end of each transect to zero so that there is no "cost" to walking the transect
diag(distMatrix) <- 0
#=========================================
# Solve asymmetric TSP
#=========================================
# This creates an instance of the asymmetric traveling salesperson (ASTP)
atsp <- as.ATSP(distMatrix)
# This creates an object of Class Tour that travels to all of the points
## In this case, the repetitive_nn method produces the smallest overall and transect-to-transect distances
tour <- solve_TSP(atsp, method = "repetitive_nn")
#=========================================
# Create a path by cutting the tour at the most "expensive" transition
#=========================================
# Sort the original data frame to match the order of the solution
dfTour = df[as.numeric(tour),]
# Add the following columns to the original dataframe:
dfTour = dfTour %>%
# Assign visit order (1 to 100, ascending)
mutate(visit_order = 1:nrow(dfTour)) %>%
# The ID of the next transect to move to
mutate(next_StID = lead(StID, order_by = visit_order),
# The angle of the start point for the next transect
next_StPt = lead(StPt, order_by = visit_order))
# lead() generates the NA's in the last record for next_StID, next_StPt, replace these by adding that information
dfTour[dfTour$visit_order == nrow(dfTour),'next_StID'] <-
dfTour[dfTour$visit_order == 1,'StID']
dfTour[dfTour$visit_order == nrow(dfTour),'next_StPt'] <-
dfTour[dfTour$visit_order == 1,'StPt']
# Function to calculate distance for 360 degrees rather than x,y coordinates
transect_distance <- function(end,start){
abs_dist = abs(start-end)
ifelse(360-abs_dist > 180, abs_dist, 360-abs_dist)
}
# Compute distance between end of each transect and start of next using polar coordinates
dfTour = dfTour %>% mutate(dist_between = transect_distance(EndPt, next_StPt))
# Identify the longest transition point for breaking the cycle
min_distance <- sum(dfTour$dist_between) - max(dfTour$dist_between)
path_start <- dfTour[dfTour$dist_between == max(dfTour$dist_between), 'next_StID']
path_end <- dfTour[dfTour$dist_between == max(dfTour$dist_between), 'EndID']
# Make a statement about the least cost path
print(sprintf("minimum cost path = %.2f, starting at node %d, ending at node %d",
min_distance, path_start, path_end))
# The variable path shows the order in which you should visit the transects
path <- cut_tour(tour, path_start, exclude_cut = F)
# Arrange df from smallest to largest travel distance
tmp1 <- dfTour %>%
arrange(dist_between)
# Change dist_between and visit_order to NA for transect with the largest distance to break cycle
# (e.g., we will not travel this distance, this represents the path endpoint)
tmp1[nrow(tmp1), "dist_between"] <- NA
tmp1[nrow(tmp1), "visit_order"] <- NA
# Set df order back to ascending by visit order
tmp2 <- tmp1 %>%
arrange(visit_order)
# Detect the break in a sequence of visit_order introduced by the NA (e.g., 1,2,3....5,6) and mark groups before the break with 0 and after the break with 1 in the "cont_per" column
tmp2$cont_per <- cumsum(!c(TRUE, diff(tmp2$visit_order)==1))
# Sort "cont_per" so that the records after the break become the beginning of the path, the records before the break fill the middle, and the record with the NA is assigned the last visit order; then assign a new visit order
tmp3 <- tmp2%>%
arrange(desc(cont_per))%>%
mutate(visit_order_FINAL=seq(1, length(tmp2$visit_order), 1))
# Dataframe ordered by progression of transects
trans_order <- cbind.data.frame(tmp3[2], tmp3[1], tmp3[4], tmp3[3], tmp3[6], tmp3[7], tmp3[8], tmp3[10])
# Insert NAs for "next" info for final transect
trans_order[nrow(trans_order),'next_StPt'] <- NA
trans_order[nrow(trans_order), 'next_StID'] <- NA
#=========================================
# View data
#=========================================
head(trans_order)
#=========================================
# Plot
#=========================================
#For fun, we can visualize the transects:
# make an empty graph space
plot(1,1, xlim = c(-planeDim-0.1, planeDim+0.1), ylim = c(-planeDim-0.1, planeDim+0.1), ty = "n")
# plot the beginning of each transect as a green point, the end as a red point,
# and a grey line representing the transect
for(i in 1:nrow(e)) {
xs = c(b[i,1], e[i,1])
ys = c(b[i,2], e[i,2])
lines(xs, ys, col = "light grey", lwd = 1, lty = 1)
points(xs, ys, col = c("green", "red"), pch = 1, cex = 1)
#text((xs), (ys), i)
}
# Add the path to the visualization
for(i in 1:(length(path)-1)) {
# This makes a line between the x coordinates for the end point of path i and beginning point of path i+1
lines(c(e[path[i],1], b[path[i+1],1]), c(e[path[i],2], b[path[i+1], 2]), lty = 1, lwd=1)
}
This is what the end result looks like

Time series with missing weekend value and keep date in plot

I have 1241 daily observations from 2012-11-19 to 2017-10-16, but only for weekdays (the number of services in a cafeteria). I'm trying to do prediction, but I have trouble initializing my time series:
timeseries = ts(passage, frequency = 365,
start = c(2012, as.numeric(format(as.Date("2012-11-19"), "%j"))),
end = c(2017, as.numeric(format(as.Date("2017-10-16"), "%j"))) )
If I do it like that, because of the missing weekends my variable will loop back after getting to 1241, all the way to 1791 (which corresponds to the number of days between my two dates), and if I want to make a training time series, choosing a date with the parameter "end" will make it not correspond to the actual date's data.
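
A minimal illustration of that recycling behaviour (toy numbers, not the cafeteria data): when end implies more observations than the data contain, ts recycles the values to fill the window:

```r
# 10 data points forced into a 15-observation window: the values recycle
x <- ts(1:10, frequency = 365, start = c(2012, 1), end = c(2012, 15))
length(x)  # 15, even though only 10 values were supplied
```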
So how can I overcome this problem? I know that I can create my time series directly with the following (and am I choosing the right frequency? If I put 5 or 7, the axis goes into very distant years):
timeseries = ts(passage, frequency = 365)
but I lose the ability to choose a start and end date and can't see that information in a plot
Edit: The reason I want to keep it as weekly data with 5 days is so that when I plot the forecast, I don't get lots of zeros in the plot
plot(forecast(timeseries_00))
like this
If I understand your problem correctly, this could be a solution:
Step 1) I create a time series (passage) with length 1241 like yours.
passage<-rep(1:1241)
"passage" time series
Step 2) I convert the time series into a matrix where every column is a working day (adding 4 zeros because the time series ends on a Monday). After that, I add two additional zero-valued columns to the matrix (Saturday and Sunday), convert back to a time series using the unmatrix function (package gdata), and delete the last 6 zeros (4 added by myself and 2 coming from the Saturday and Sunday columns).
passage_matrix<-cbind(t(matrix(c(passage,c(0,0,0,0)),nrow = 5)),0,0)
library(gdata)
passage_00<-as.numeric(unmatrix(passage_matrix, byrow=TRUE))
passage_00<-passage_00[1:(length(passage_00)-6)]
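The reshaping in Step 2 can be checked on a tiny invented vector; this sketch uses as.numeric(t(...)) as a base-R stand-in for gdata::unmatrix(byrow = TRUE):

```r
# 11 weekday values -> pad to a full 5-day week, add weekend columns, flatten row-wise
passage_small <- 1:11
pad <- (5 - length(passage_small) %% 5) %% 5             # zeros needed to complete the last week
m <- cbind(t(matrix(c(passage_small, rep(0, pad)), nrow = 5)), 0, 0)
flat <- as.numeric(t(m))                                 # row-wise flatten
flat <- flat[1:(length(flat) - pad - 2)]                 # drop the padding and the final weekend
flat
# 1 2 3 4 5 0 0 6 7 8 9 10 0 0 11
```

Each 5-day week now carries two weekend zeros, and the trailing padding is removed, exactly as in Step 2.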
Step 3) I create my new time series
timeseries_00 = ts(passage_00,
frequency = 365,
start = c(2012, as.numeric(format(as.Date("2012-11-19"),
"%j"))))
Step 4) Now I'm able to plot the time series with correct date label (just for working days in my exemple below)
date<-seq(from=as.Date("2012-11-19"),by=1,length.out=length(timeseries_00))
plot(timeseries_00[timeseries_00>0],axes=F)
axis(1, at=1:length(timeseries_00[timeseries_00>0]), labels=date[timeseries_00>0])
"passage" time series with right date
Step 5) Forecast the time series
library(forecast)
for_00<-forecast(timeseries_00)
Step 6) I have to modify my original time series in order to have the same length between forecast data and original data
length(for_00$mean) #length of the prediction
passage_00extended<-c(passage_00,rep(0,730)) #Add zeros for future dates
timeseries_00extended = ts(passage_00extended, frequency = 365,
start = c(2012, as.numeric(format(as.Date("2012-11-19"), "%j"))))
date<-seq(from=as.Date("2012-11-19"),by=1,length.out=length(timeseries_00extended))
Step 7) I have to modify the predicted data in order to have the same length as timeseries_00extended; all fake data (0 values) are changed to NA
pred_mean<-c(rep(NA,length(passage_00)),for_00$mean) #Prediction mean
pred_upper<-c(rep(NA,length(passage_00)),for_00$upper[,2]) #Upper 95%
pred_lower<-c(rep(NA,length(passage_00)),for_00$lower[,2]) #Lower 95%
passage_00extended[passage_00extended==0]<-NA
Step 8) I plot the original data (passage_00extended) and the predictions on the same plot (with different colours for the mean [blue] and the upper and lower bounds [orange])
plot(passage_00extended,axes=F,ylim=c(1,max(pred_upper[!is.na(pred_upper)])))
lines(pred_mean,col="Blue")
lines(pred_upper,col="orange")
lines(pred_lower,col="orange")
axis(1, at=1:length(timeseries_00extended), labels=date)
Plot: Forecast

How to apply a function to increasing subsets of data in a data frame

I wish to apply a set of pre-written functions to subsets of data in a data frame that progressively increase in size. In this example, the pre-written functions calculate 1) the distance between each consecutive pair of locations in a series of data points, 2) the total distance of the series of data points (sum of step 1), 3) the straight line distance between the start and end location of the series of data points and 4) the ratio between the straight line distance (step3) and the total distance (step 2). I wish to know how to apply these steps (and consequently similar functions) to sub-groups of increasing size within a data frame. Below are some example data and the pre-written functions.
Example data:
> dput(df)
structure(list(latitude = c(52.640715, 52.940366, 53.267749,
53.512608, 53.53215, 53.536443), longitude = c(3.305727, 3.103194,
2.973257, 2.966621, 3.013587, 3.002674)), .Names = c("latitude",
"longitude"), class = "data.frame", row.names = c(NA, -6L))
Latitude Longitude
1 52.64072 3.305727
2 52.94037 3.103194
3 53.26775 2.973257
4 53.51261 2.966621
5 53.53215 3.013587
6 53.53644 3.002674
Pre-written functions:
# Step 1: To calculate the distance between a pair of locations
pairdist = sapply(2:nrow(df), function(x) with(df, trackDistance(longitude[x-1], latitude[x-1], longitude[x], latitude[x], longlat=TRUE)))
# Step 2: To sum the total distance between all locations
totdist = sum(pairdist)
# Step 3: To calculate the distance between the first and end location
straight = trackDistance(df[1,2], df[1,1], df[nrow(df),2], df[nrow(df),1], longlat=TRUE)
# Step 4: To calculate the ratio between the straightline distance & total distance
distrat = straight/totdist
I would like to apply the functions firstly to a sub-group of only the first two rows (i.e. rows 1-2), then to a subgroup of the first three rows (rows 1-3), then four rows…and so on…until I get to the end of the data frame (in the example this would be a sub-group containing rows 1-6, but it would be nice to know how to apply this to any data frame).
Desired output:
Subgroup Totdist Straight Ratio
1 36.017 36.017 1.000
2 73.455 73.230 0.997
3 100.694 99.600 0.989
4 104.492 101.060 0.967
5 105.360 101.672 0.965
I have attempted to do this with no success and at the moment this is beyond my ability. Any advice would be very much appreciated!
There's a lot of optimization that can be done.
trackDistance() is vectorized, so you don't need apply for that.
To get a vectorized way of calculating the total distance, use cumsum().
You only need to calculate the pairwise distances once. Recalculating them every time you look at a different subset is a waste of resources. So try to think in terms of the complete data frame when constructing your functions.
To get everything in one function that outputs the desired data frame, you can do something along those lines :
myFun <- function(x){
# This is just to make typing easier in the rest of the function
lat <- x[["Latitude"]]
lon <- x[["Longitude"]]
nr <- nrow(x)
pairdist <- trackDistance(lon[-nr], lat[-nr],
lon[-1],lat[-1],
longlat=TRUE)
totdist <- cumsum(pairdist)
straight <- trackDistance(rep(lon[1],nr-1),
rep(lat[1],nr-1),
lon[-1],lat[-1],
longlat=TRUE)
ratio <- straight/totdist
data.frame(totdist,straight,ratio)
}
Proof of concept:
> myFun(df)
totdist straight ratio
1 36.01777 36.01777 1.0000000
2 73.45542 73.22986 0.9969293
3 100.69421 99.60013 0.9891346
4 104.49261 101.06023 0.9671519
5 105.35956 101.67203 0.9650005
Note that you can add extra arguments to define the latitude and longitude columns. And watch your capitalization: in your question you use Latitude (capital L) in the printed data frame, but latitude (small l) in your code.
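If trackDistance() is not available (it is assumed here to come from a tracking package such as trip), the same cumulative-ratio computation can be sketched with a plain haversine helper. The numbers will differ slightly from trackDistance's, since this helper assumes a spherical Earth:

```r
# Great-circle distance in km between lon/lat pairs (spherical Earth, radius 6371 km)
haversine_km <- function(lon1, lat1, lon2, lat2, R = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 + cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * R * asin(sqrt(pmin(1, a)))
}

# Same structure as myFun(), with the column names passed as arguments
straightness <- function(x, lat = "latitude", lon = "longitude") {
  la <- x[[lat]]; lo <- x[[lon]]; nr <- nrow(x)
  pairdist <- haversine_km(lo[-nr], la[-nr], lo[-1], la[-1])  # consecutive pairs
  totdist  <- cumsum(pairdist)                                # growing-subset totals
  straight <- haversine_km(lo[1], la[1], lo[-1], la[-1])      # start to each point
  data.frame(totdist, straight, ratio = straight / totdist)
}
```

By the triangle inequality the straight-line distance can never exceed the cumulative track distance, so the ratio column stays at or below 1.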

Creating a time offset for a zoo object

I have a zoo object that contains velocity data from two different points (V1 and V2), as well as particle data from the same two points. The distance between the two points is 170 m.
Date<-as.POSIXct("2012-01-01 08:00:00") + 1:120
V1<-rnorm(120,mean=5) #Velocity in m/sec
R<-rnorm(4,mean=3)
V2<-V1+R #Velocity in m/sec
Data1<-rnorm(120, mean=20)
Data2<-rnorm(120, mean=25)
V<-data.frame(V1,V2,Data1,Data2)
z<-zoo(as.matrix(V),order.by=Date)
L<-170 #Length =170m
If I average the velocity data
z$Avg_Vel<-rowMeans(z[,1:2])
I should have a pretty good idea of how fast the particles are traveling, and since I know the distance I should have a good idea of how long it is taking the particles to travel from Point 1 to Point 2 during the course of the time series.
z$Off<-L/z$Avg_Vel
But I can't figure out how to offset my zoo object to account for the time delay it takes for particles to travel between the two points. So if I am interested in finding the difference between Data1 and Data2, I don't want to do
Diff<-z$Data1-z$Data2
As this does not include the offset
If it takes 2 minutes for the particles to travel from point 1 to point 2, then I would want
Diff<-z$Data1-z$Data2(+2min)
So that I am looking at the difference between Data1 at time x, and Data2 at time x+2min
To clarify in response to an answer, the end result would be a rolling offset. So that
Offset<-z$Off
Looking at this kind of Offset
round(as.numeric(z$Off))
The result would look like this
1 Diff<-z$Data1-z$Data2(+22 sec)
2 Diff<-z$Data1-z$Data2(+23 sec)
3 Diff<-z$Data1-z$Data2(+32 sec)..........
This is a way to include an offset:
offset <- 120 # 2 minutes in seconds
ix <- index(z) + offset # new time index
Calculate the difference with a 2-minute offset:
z$Data1[rev(index(z) %in% ix)] -
as.numeric(z$Data2[index(z) %in% ix])
Your example time series is too short for an offset of 2 minutes. I tested it with a 1-minute offset instead (offset = 60).
If you want to use a vector of offsets, use this:
offsets <- sample(1:5, nrow(z), TRUE) # some example offsets (in seconds)
# alternatively you could use:
# round(as.numeric(z$Off))
ixs <- index(z) + offsets
ixs_num <- match(ixs, index(z), nomatch = NA)
z$Data1[seq(length(ixs_num))[!is.na(ixs_num)]] -
as.numeric(z$Data2)[na.omit(ixs_num)]
Note: this procedure works for both positive and negative offsets.
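The per-observation matching can be illustrated on a tiny regular series with an invented constant 3-second offset (the same mechanics apply with round(as.numeric(z$Off))):

```r
library(zoo)

# Toy 10-second series, 1-second spacing (values invented for illustration)
t0 <- as.POSIXct("2012-01-01 08:00:00") + 1:10
z  <- zoo(cbind(Data1 = 1:10, Data2 = 101:110), order.by = t0)

off <- rep(3, 10)                        # hypothetical per-observation lag in seconds
ix  <- match(index(z) + off, index(z))   # position of Data2 at time x + offset (NA if outside)

# Data1 at time x minus Data2 at time x + 3s; the last 3 points have no partner
diff_off <- coredata(z$Data1)[!is.na(ix)] - coredata(z$Data2)[na.omit(ix)]
diff_off
# all seven values are -103
```

Since Data2 is always Data1 + 100 here and shifting forward by 3 seconds adds another 3, every aligned difference comes out as -103, which makes the alignment easy to verify.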
