I have time-series data that contain some peaks and valleys which differ significantly from the threshold level (an example vector is provided below).
The peak/valley height/width may vary as well as the noise level.
I am interested in finding & reporting both peaks and valleys.
Currently I am using a function based on this thread:
Peak signal detection in realtime timeseries data
However, I would like to improve it for my needs by introducing an additional parameter: peaks/valleys should be ignored if they are not significant enough (for peaks, below a certain height; for valleys, not far enough downward). In the attached picture, I put red circles on the peaks I would like to be reported; unmarked ones should be ignored.
Here is an example vector:
signal <- c(659, 613, 586, 642, 685, 695, 691, 733, 638, 708, 706, 691, 703, 712, 715, 700, 693, 682, 717, 711, 722, 700, 704, 704, 715, 691, 670, 684, 689, 711, 680, 692, 686, 710, 702, 699, 702, 715, 691, 670, 684, 689, 711, 673, 688, 699, 677, 701, 680, 692, 686, 710, 702, 699, 717, 691, 703, 712, 715, 700, 693, 682, 717, 711, 722, 700, 704, 704, 715, 691, 670, 684, 689, 711, 680, 692, 686, 710, 702, 699, 702, 706, 651, 712, 722, 734, 705, 714, 691, 704, 704, 669, 712, 715, 689, 715, 691, 670, 684, 689, 711, 651, 712, 722, 734, 705, 714, 691, 686, 676, 690, 693, 702, 694, 682, 693, 724, 693, 707, 684, 687, 705, 680, 680, 705, 680, 693, 700, 704, 704, 669, 712, 715, 689, 715, 691, 670, 684, 689, 711, 673, 678, 699, 677, 680, 682, 676, 690, 658, 675, 663, 667, 682, 673, 675, 656, 652, 563, 544, 542, 540, 532, 538, 505, 526, 565, 629, 720, 713, 720, 720, 773, 732, 740, 695, 689, 723, 685, 726, 710, 684, 693, 715, 692, 683, 712, 707, 693, 699, 717, 703, 687, 682, 690, 716, 708, 713, 700, 676, 708, 691, 717, 711, 722, 688, 695, 641, 666, 638, 639, 600, 635, 609, 653, 671, 649, 716, 708, 713, 700, 676, 708, 691, 717, 711, 722, 700, 704, 704, 669, 712, 715, 689, 715, 691, 670, 684, 689, 711, 673, 688, 699, 677, 701, 680, 692, 686, 710, 702, 699, 717, 691, 703, 712, 715, 700, 693, 682, 717, 711, 722, 700, 704, 704, 715, 691, 670, 684, 689, 711, 680, 692, 686, 710, 702, 699, 702, 715, 691, 670, 684, 689, 711, 673, 688, 699, 677, 701, 680, 692, 686, 710, 702, 699, 717, 691, 703, 712, 715, 700, 693, 682, 717, 711, 722, 700, 704, 704, 715, 691, 670, 684, 769, 767, 740, 752, 686, 710, 702, 699, 702, 706, 651, 712, 722, 734, 705, 714, 691, 704, 704, 669, 712, 715, 689, 715, 691, 670, 684, 689, 711, 704, 669, 712, 715, 689, 715, 691, 670, 684, 689, 711, 673, 688, 699, 677, 701, 680, 692, 686, 710, 702, 699, 717, 691, 703, 712, 715, 700, 693, 682, 717, 711, 722, 700, 704, 704, 715, 691, 670, 684, 689, 711, 680, 692, 686, 710, 702, 699, 702, 715, 691, 670, 684, 689, 711, 673, 688, 699, 677, 701, 680, 692, 686, 710, 702, 699, 665, 630, 808, 686, 787, 781, 796, 815, 786, 793, 664, 717, 691, 703, 712, 715, 700, 693, 682, 717, 711, 722, 700, 704, 704, 715, 691, 670, 684, 689, 711, 680, 692, 686, 710, 702, 699, 702, 706, 651, 712, 722, 734, 705, 714, 691, 704, 704, 669, 712, 715, 689, 715, 691, 670, 684, 689, 711)
And here is a depiction of what I want (red circles - peaks to be reported):
I am the author of the original algorithm you were referring to.
To answer your question, let's first discuss the characteristics of your data:
The timeseries is stationary: the average value seems to be constant around 700
There are infrequent peaks, both up and down
The peaks do not change the distribution or the trend of the data
The peaks are clustered: if there is a peak (signal), the peak persists for approximately 20-30 data points
Because your timeseries is stationary, you should set the influence parameter close to zero. Because the peaks are infrequent, you should set the threshold high (above 3 standard deviations). Because the peaks are clustered, you should set the lag parameter as high as you can.
For example, using:
lag <- 130
threshold <- 4
influence <- 0
Yields the following:
Here is the accompanying code:
ThresholdingAlgo <- function(y, lag, threshold, influence) {
  signals <- rep(0, length(y))
  filteredY <- y[1:lag]
  avgFilter <- NULL
  stdFilter <- NULL
  # Initialise the moving mean and standard deviation over the first 'lag' points
  avgFilter[lag] <- mean(y[1:lag], na.rm = TRUE)
  stdFilter[lag] <- sd(y[1:lag], na.rm = TRUE)
  for (i in (lag + 1):length(y)) {
    if (abs(y[i] - avgFilter[i - 1]) > threshold * stdFilter[i - 1]) {
      # Data point deviates enough from the moving mean: signal +1 (peak) or -1 (valley)
      if (y[i] > avgFilter[i - 1]) {
        signals[i] <- 1
      } else {
        signals[i] <- -1
      }
      # Dampen the effect of signal points on the filter via the influence parameter
      filteredY[i] <- influence * y[i] + (1 - influence) * filteredY[i - 1]
    } else {
      signals[i] <- 0
      filteredY[i] <- y[i]
    }
    # Update the moving filters
    avgFilter[i] <- mean(filteredY[(i - lag):i], na.rm = TRUE)
    stdFilter[i] <- sd(filteredY[(i - lag):i], na.rm = TRUE)
  }
  return(list("signals" = signals, "avgFilter" = avgFilter, "stdFilter" = stdFilter))
}
# The example vector from the question
y <- signal
lag <- 130
threshold <- 4
influence <- 0
# Run algo with lag = 130, threshold = 4, influence = 0
result <- ThresholdingAlgo(y,lag,threshold,influence)
# Plot result
par(mfrow = c(2,1),oma = c(2,2,0,0) + 0.1,mar = c(0,0,2,1) + 0.2)
plot(1:length(y),y,type="l",ylab="",xlab="")
lines(1:length(y),result$avgFilter,type="l",col="cyan",lwd=2)
lines(1:length(y),result$avgFilter+threshold*result$stdFilter,type="l",col="green",lwd=2)
lines(1:length(y),result$avgFilter-threshold*result$stdFilter,type="l",col="green",lwd=2)
plot(result$signals,type="S",col="red",ylab="",xlab="",ylim=c(-1.5,1.5),lwd=2)
Now if you also wish to find the minimum or maximum value within each signal, you can simply overlay the signals array on your y (data) array and, within each cluster of signals, identify where y attains its highest or lowest value.
For example (using explicit loops instead of a quick one-liner):
# Track the current cluster of consecutive signals
cluster <- list(flag = FALSE, sign = 0, data = NULL, index = NULL)
for (i in 1:length(result$signals)) {
  if (abs(result$signals[i]) == 1) {
    # Inside a cluster: remember its sign and collect its data points
    cluster$flag <- TRUE
    cluster$sign <- result$signals[i]
    cluster$data <- append(cluster$data, y[i])
    cluster$index <- append(cluster$index, i)
  } else {
    if (cluster$flag) {
      # A cluster just ended: report its extremum
      if (cluster$sign == 1) {
        cat("The positive cluster starting at", cluster$index[1], "has maximum", max(cluster$data), "\n")
      } else {
        cat("The negative cluster starting at", cluster$index[1], "has minimum", min(cluster$data), "\n")
      }
    }
    cluster$flag <- FALSE
    cluster$sign <- 0
    cluster$data <- NULL
    cluster$index <- NULL
  }
}
This gives the following output:
The negative cluster starting at 158 has minimum 505
The positive cluster starting at 410 has maximum 808
The positive cluster starting at 412 has maximum 815
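If, on top of the statistical threshold, you want the absolute minimum peak/valley height you asked about, one option is to post-filter the reported signals. A minimal sketch, assuming a hypothetical minHeight parameter (not part of the original algorithm) that gives the smallest deviation from the moving mean you still care about:

# Hypothetical post-filter: clear signals whose deviation from the moving mean
# at detection time (index i - 1) is smaller than minHeight
minHeight <- 50  # assumed value; tune to your data
for (i in which(result$signals != 0)) {
  if (abs(y[i] - result$avgFilter[i - 1]) < minHeight) {
    result$signals[i] <- 0
  }
}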
Hope that helps!
Let's start by plotting your data, along with its mean:
plot(signal, type = 'l')
abline(h = mean(signal), col = 'blue3', lty = 2)
Since you want the upwards peak near the end of the series to be included but not the downwards spike at the very start, we need to find the number of standard deviations that would identify this peak but not the one at the start. This turns out to be about 3.5 standard deviations, as we can see if we plot the 3.5 standard deviation lines:
abline(h = mean(signal) + c(3.5, -3.5) * sd(signal), lty = 2, col = 'red')
Now comes the tricky part. We use run length encoding to identify which contiguous parts of the sequence are outside of the 3.5 standard deviations, and find the point of largest absolute deviation for each:
big <- abs(signal - mean(signal)) > 3.5 * sd(signal)
exceed <- split(seq_along(big), data.table::rleid(big))[rle(big)$value]
peaks <- sapply(exceed, function(x) x[which.max(abs(signal[x] - mean(signal)))])
Now the vector peaks will contain only the maximum or minimum point within each spike. Note that although you only have two spikes in your sample data, this will work for however many spikes you have.
To demonstrate, let us plot the points:
points(peaks, signal[peaks], col = 'red', pch = 16)
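If you'd rather not depend on data.table for rleid, the same grouping can be done with base R's rle(); a minimal sketch that should yield the same peaks vector:

# Base-R equivalent of the data.table::rleid() grouping above
big <- abs(signal - mean(signal)) > 3.5 * sd(signal)
runs <- rle(big)
run_id <- rep(seq_along(runs$values), runs$lengths)  # consecutive-run IDs
exceed <- split(seq_along(big), run_id)[runs$values]
peaks <- sapply(exceed, function(x) x[which.max(abs(signal[x] - mean(signal)))])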
Maybe you want something like this, which simply marks the global maximum and minimum:
max.index <- which.max(signal)
max.value <- max(signal)
min.index <- which.min(signal)
min.value <- min(signal)
plot(signal, type = "l")
points(max.index, max.value, col = "red", pch = 16)
points(min.index, min.value, col = "red", pch = 16)
Output:
Being new to the OR-Tools libraries, I am unable to modify the existing code for my requirements. I'm trying to solve a routing problem based on a graph structure. Right now the distance matrix is configured so that each node/location always has a connection to every other node/location. Is it possible to change the distance matrix (e.g. by entering -1) to tell Google OR-Tools that some nodes/locations have no connection to certain other nodes?
"""Simple Pickup Delivery Problem (PDP)."""
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
def create_data_model():
    """Stores the data for the problem."""
    data = {}
    data['distance_matrix'] = [
        [
            0, 548, 776, 696, 582, 274, 502, 194, 308, 194, 536, 502, 388, 354,
            468, 776, 662
        ],
        [
            548, 0, 684, 308, 194, 502, 730, 354, 696, 742, 1084, 594, 480, 674,
            1016, 868, 1210
        ],
        [
            776, 684, 0, 992, 878, 502, 274, 810, 468, 742, 400, 1278, 1164,
            1130, 788, 1552, 754
        ],
        [
            696, 308, 992, 0, 114, 650, 878, 502, 844, 890, 1232, 514, 628, 822,
            1164, 560, 1358
        ],
        [
            582, 194, 878, 114, 0, 536, 764, 388, 730, 776, 1118, 400, 514, 708,
            1050, 674, 1244
        ],
        [
            274, 502, 502, 650, 536, 0, 228, 308, 194, 240, 582, 776, 662, 628,
            514, 1050, 708
        ],
        [
            502, 730, 274, 878, 764, 228, 0, 536, 194, 468, 354, 1004, 890, 856,
            514, 1278, 480
        ],
        [
            194, 354, 810, 502, 388, 308, 536, 0, 342, 388, 730, 468, 354, 320,
            662, 742, 856
        ],
        [
            308, 696, 468, 844, 730, 194, 194, 342, 0, 274, 388, 810, 696, 662,
            320, 1084, 514
        ],
        [
            194, 742, 742, 890, 776, 240, 468, 388, 274, 0, 342, 536, 422, 388,
            274, 810, 468
        ],
        [
            536, 1084, 400, 1232, 1118, 582, 354, 730, 388, 342, 0, 878, 764,
            730, 388, 1152, 354
        ],
        [
            502, 594, 1278, 514, 400, 776, 1004, 468, 810, 536, 878, 0, 114,
            308, 650, 274, 844
        ],
        [
            388, 480, 1164, 628, 514, 662, 890, 354, 696, 422, 764, 114, 0, 194,
            536, 388, 730
        ],
        [
            354, 674, 1130, 822, 708, 628, 856, 320, 662, 388, 730, 308, 194, 0,
            342, 422, 536
        ],
        [
            468, 1016, 788, 1164, 1050, 514, 514, 662, 320, 274, 388, 650, 536,
            342, 0, 764, 194
        ],
        [
            776, 868, 1552, 560, 674, 1050, 1278, 742, 1084, 810, 1152, 274,
            388, 422, 764, 0, 798
        ],
        [
            662, 1210, 754, 1358, 1244, 708, 480, 856, 514, 468, 354, 844, 730,
            536, 194, 798, 0
        ],
    ]
    data['pickups_deliveries'] = [
        [1, 6],
        [2, 10],
        [4, 3],
        [5, 9],
        [7, 8],
        [15, 11],
        [13, 12],
        [16, 14],
    ]
    data['num_vehicles'] = 4
    data['depot'] = 0
    return data
def print_solution(data, manager, routing, solution):
    """Prints solution on console."""
    print(f'Objective: {solution.ObjectiveValue()}')
    total_distance = 0
    for vehicle_id in range(data['num_vehicles']):
        index = routing.Start(vehicle_id)
        plan_output = 'Route for vehicle {}:\n'.format(vehicle_id)
        route_distance = 0
        while not routing.IsEnd(index):
            plan_output += ' {} -> '.format(manager.IndexToNode(index))
            previous_index = index
            index = solution.Value(routing.NextVar(index))
            route_distance += routing.GetArcCostForVehicle(
                previous_index, index, vehicle_id)
        plan_output += '{}\n'.format(manager.IndexToNode(index))
        plan_output += 'Distance of the route: {}m\n'.format(route_distance)
        print(plan_output)
        total_distance += route_distance
    print('Total Distance of all routes: {}m'.format(total_distance))
def main():
    """Entry point of the program."""
    # Instantiate the data problem.
    data = create_data_model()

    # Create the routing index manager.
    manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
                                           data['num_vehicles'], data['depot'])

    # Create Routing Model.
    routing = pywrapcp.RoutingModel(manager)

    # Define cost of each arc.
    def distance_callback(from_index, to_index):
        """Returns the distance between the two nodes."""
        # Convert from routing variable Index to distance matrix NodeIndex.
        from_node = manager.IndexToNode(from_index)
        to_node = manager.IndexToNode(to_index)
        return data['distance_matrix'][from_node][to_node]

    transit_callback_index = routing.RegisterTransitCallback(distance_callback)
    routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)

    # Add Distance constraint.
    dimension_name = 'Distance'
    routing.AddDimension(
        transit_callback_index,
        0,  # no slack
        3000,  # vehicle maximum travel distance
        True,  # start cumul to zero
        dimension_name)
    distance_dimension = routing.GetDimensionOrDie(dimension_name)
    distance_dimension.SetGlobalSpanCostCoefficient(100)

    # Define Transportation Requests.
    for request in data['pickups_deliveries']:
        pickup_index = manager.NodeToIndex(request[0])
        delivery_index = manager.NodeToIndex(request[1])
        routing.AddPickupAndDelivery(pickup_index, delivery_index)
        routing.solver().Add(
            routing.VehicleVar(pickup_index) == routing.VehicleVar(
                delivery_index))
        routing.solver().Add(
            distance_dimension.CumulVar(pickup_index) <=
            distance_dimension.CumulVar(delivery_index))

    # Setting first solution heuristic.
    search_parameters = pywrapcp.DefaultRoutingSearchParameters()
    search_parameters.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PARALLEL_CHEAPEST_INSERTION)

    # Solve the problem.
    solution = routing.SolveWithParameters(search_parameters)

    # Print solution on console.
    if solution:
        print_solution(data, manager, routing, solution)


if __name__ == '__main__':
    main()
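For what it's worth, OR-Tools has no built-in notion of a -1 or "missing" entry in the distance matrix. A common workaround (a sketch of the general idea, not code from the official examples) is to store a sentinel for missing edges and let the cost callback map it to a prohibitively large cost, so routes over such arcs are effectively never chosen:

# Sketch: a replacement for distance_callback inside main(). It treats a
# sentinel value (-1, an assumed convention) as "no connection". BIG_COST is
# an arbitrary large number; it strongly discourages, but does not strictly
# forbid, using such arcs.
NO_CONNECTION = -1
BIG_COST = 10**9

def distance_callback(from_index, to_index):
    """Returns the distance between two nodes, penalising missing edges."""
    from_node = manager.IndexToNode(from_index)
    to_node = manager.IndexToNode(to_index)
    distance = data['distance_matrix'][from_node][to_node]
    return BIG_COST if distance == NO_CONNECTION else distance

If you need a hard guarantee that an arc is never used, removing values from the routing variables' domains (e.g. via routing.NextVar(...).RemoveValue(...)) is another possibility, but I would verify that against the OR-Tools documentation for your version.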
I've got some data with 23 peaks. I've used pracma::findpeaks to pick out the peaks. However, one of the peaks has two identical values adjacent to each other, at time=7524 and time=7525, and it seems findpeaks deals with this by ignoring the peak.
Could someone please help me make it recognise this peak? I'd like it to pick out the first of the two values, though it would also be good to know how to make it pick out the last of them as well.
data <- data.frame(time=c(1562, 1563, 1564, 1565, 1566, 1810, 1811, 1812, 1813, 1814,
2058, 2059, 2060, 2061, 2306, 2307, 2308, 2309, 2310, 2560, 2561,
2562, 2563, 2564, 3064, 3065, 3066, 3067, 3580, 3581, 3582, 3583,
3584, 4095, 4096, 4097, 4098, 4099, 4610, 4611, 4612, 4613, 4614,
5128, 5129, 5130, 5131, 5132, 5133, 5637, 5638, 5639, 5640, 5641,
5876, 5877, 5878, 5879, 5880, 5881, 5882, 6125, 6126, 6127, 6128,
6129, 6130, 6607, 6608, 6609, 6610, 6611, 6612, 6613, 7072, 7073,
7074, 7075, 7076, 7077, 7078, 7079, 7519, 7520, 7521, 7522, 7523,
7524, 7525, 7526, 7527, 7528, 7941, 7942, 7943, 7944, 7945, 7946,
7947, 7948, 7949, 8342, 8343, 8344, 8345, 8346, 8347, 8348, 8349,
8350, 8351, 8708, 8709, 8710, 8711, 8712, 8713, 8714, 8715, 8716,
8717, 8718, 9045, 9046, 9047, 9048, 9049, 9050, 9051, 9052, 9053,
9054, 9055, 9352, 9353, 9354, 9355, 9356, 9357, 9358, 9359, 9360,
9361, 9362, 9363, 9624, 9625, 9626, 9627, 9628, 9629, 9630, 9631,
9632, 9633, 9634, 9867, 9868, 9869, 9870, 9871, 9872, 9873, 9874,
9875, 9876),
value=c(509, 672, 758, 686, 584, 559, 727, 759, 688, 528, 562, 711,
768, 678, 644, 750, 822, 693, 531, 566, 738, 793, 730, 511, 587,
739, 761, 651, 579, 747, 768, 705, 544, 551, 687, 756, 749, 645,
564, 680, 724, 691, 596, 535, 625, 685, 689, 612, 512, 537, 616,
657, 653, 573, 506, 598, 675, 685, 668, 609, 515, 575, 656, 687,
678, 626, 533, 509, 587, 641, 680, 663, 602, 515, 505, 583, 646,
693, 696, 684, 630, 549, 500, 572, 637, 681, 725, 736, 736, 703,
649, 556, 568, 637, 682, 743, 765, 767, 709, 660, 587, 548, 622,
690, 761, 779, 764, 749, 694, 631, 525, 571, 646, 724, 788, 811,
834, 818, 776, 712, 616, 536, 556, 649, 738, 801, 857, 866, 837,
808, 718, 647, 568, 508, 605, 714, 823, 872, 917, 916, 890, 825,
742, 642, 543, 549, 656, 766, 851, 921, 947, 951, 892, 830, 730,
617, 586, 675, 760, 804, 816, 795, 740, 690, 613, 522))
library(pracma)
library(ggplot2)

peaks <- data.frame(findpeaks(data$value, npeaks=23, threshold=100, sortstr=TRUE))
data$n <- seq(1, length(data$value))
data <- merge(x=data, y=peaks, by.x="n", by.y="X2", all.x=TRUE, all.y=TRUE)
ggplot(data, aes(x=time, y=value)) +
  geom_col(fill="red") +
  geom_point(aes(x=time, y=X1))
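For what it's worth, pracma::findpeaks matches peaks with a regular expression over the signs of successive differences, and its peakpat argument lets you override the default pattern (which requires a strict rise followed by a strict fall). Here is a sketch that also accepts a flat top, so plateau peaks like the one at time=7524/7525 are matched; since findpeaks locates the extremum with which.max, I would expect it to report the first point of the plateau:

# Allow zero or more flat steps ("0") between the rising and falling runs
peaks <- data.frame(findpeaks(data$value, npeaks=23, threshold=100, sortstr=TRUE,
                              peakpat="[+]{1,}[0]*[-]{1,}"))

To report the last point of a plateau instead, one option is to post-process each returned position by stepping forward while the following value is identical.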