Adding a duration to a stopover/waypoint on Google Maps - google-maps-api-3

Is it possible, or are there any plans, to allow waypoints that are stopovers to have a duration added to them that is then in turn added to the overall trip duration?
For example:
start at A
go via B as a stopover, wait 60 minutes
end at C
Either A to B or B to C should be reported as 60 minutes longer than the raw trip time.
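If the Directions service itself offers no stopover-duration parameter (I'm not aware of one), a workaround is to add the wait times yourself after the route comes back. A minimal Python sketch, operating on the raw Directions JSON (routes[].legs[].duration.value, in seconds); the function and argument names are illustrative:

# Sketch (not part of the Maps API): add per-stopover wait times client-side.
# waypoint_waits maps a leg index (the leg that ends at the stopover) to extra seconds.
def trip_duration_with_stopovers(directions_result, waypoint_waits):
    legs = directions_result["routes"][0]["legs"]
    total = sum(leg["duration"]["value"] for leg in legs)   # raw driving time, seconds
    total += sum(waypoint_waits.get(i, 0) for i in range(len(legs)))
    return total

# A -> B (stopover, wait 60 minutes) -> C:
# trip_duration_with_stopovers(result, {0: 60 * 60})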

Related

AWS DynamoDB metrics don't work correctly

I am using DynamoDB and am curious about this.
When I view the table metrics with a period of 10 or 30 seconds, the consumed capacity seems to exceed the provisioned value.
Then, when I view the same metrics with a period of 1 minute, it doesn't come near the provisioned value at all.
I want to know why this happens.
This isn't a DynamoDB thing. It's a CloudWatch rendering thing.
Consumed capacity has a natural period of 1 minute. When you set your graph period to 1 minute everything is correct. Consumed is below provisioned.
If you change the graph period to 30 seconds, your consumed view adjusts and you see consumption that's double what's real. The math behind the scenes divides by the wrong period. Graph period of 10 seconds, you get 6x reality. Graph period of 5 minutes, you get 1/5th reality.
The Provisioned line isn't based on an equation involving Period so it's not affected by the chosen period.
Maybe someone can comment on why the user is allowed to control the Period of the view but it just messes things up when it doesn't match the natural period of the data.
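As a side note, one way to sidestep the rendering issue is to pull the metric yourself at its natural 60-second period and divide the Sum by 60, which gives the average per-second rate that is directly comparable with the provisioned value. A Python/boto3 sketch (the table name is a placeholder):

import boto3
from datetime import datetime, timedelta, timezone

# Sketch: read ConsumedReadCapacityUnits at its natural 1-minute period and convert
# each per-minute Sum into an average per-second rate.
cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedReadCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "my-table"}],   # placeholder table name
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=60,                 # match the metric's natural period
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"] / 60.0)   # average consumed units per second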

Understanding the retention policy for a Graphite DB

I have the retention policy below in my storage-schemas.conf file:
[metrics]
pattern = ^metrics.api.*
retentions = 10s:5m,1m:1d,1h:30d,1d:1y,30d:10y
Below is my understanding:
This policy applies to metric names matching the pattern, i.e. starting with metrics.api.
1st, 10s:5m: if one or more records are inserted within a 10-second interval, only the latest is kept as a single datapoint, and that history is maintained for 5 minutes; say 5 datapoints are added for the metric key within those 5 minutes.
2nd, 1m:1d: this takes over after the 5 minutes are up for the same metric key; if one or more records arrive within a minute, only the latest is kept as a single datapoint, and that history is maintained for 1 day; say 15 datapoints are added for the metric key within that day.
So my question is: what happens between these two retentions? Does it average them, e.g. (5 + 15) / 2 = 10, and keep one averaged datapoint out of the 1st and 2nd retentions?
... and so on, until up to 10 years of data is stored.
Can you please explain the above retention policy?
The aggregationMethod is applied to this retention policy when data crosses retention boundaries.
First retention - 10s:5m means Graphite will store 30 datapoints (every 10 seconds for last 5 minutes) in archive 0.
Please note, that it will always store these datapoints, even if no data arrived. In that case Graphite will put NULLs there.
Then the next retention - 1m:1d - means that every minute Whisper will take 6 of these 10s datapoints from archive 0, apply the average() function and store the result in archive 1.
But please note that Whisper will do so only if at least 3 of those 6 points (the number of datapoints, 6, multiplied by xFilesFactor = 0.5) have values (i.e. are not NULLs). Otherwise Whisper decides that it does not have enough data to propagate and puts a NULL there as well.
Etc - third retention 1h:30d means that 60 of datapoints from archive 1 will be aggregated using average function and propagated to archive 2, but only if at least 30 of them have value, etc.
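As a rough illustration of that propagation rule (this is not Whisper's actual code), the step from one archive to the next can be sketched in Python like this:

# Illustrative sketch of the propagation rule described above.
def propagate(points, xfiles_factor=0.5):
    # points: one higher-archive bucket's worth of lower-archive values (None = no data)
    known = [p for p in points if p is not None]
    if len(known) < len(points) * xfiles_factor:
        return None                    # too many gaps -> store NULL in the next archive
    return sum(known) / len(known)     # default aggregationMethod: average

# Six 10-second points -> one 1-minute point; needs at least 3 real values:
propagate([1, 2, None, 4, None, None])          # -> 2.33...
propagate([1, None, None, None, None, None])    # -> None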

Zabbix triggers - alert if the power of a fiber optic line (in dB) is continuously weakening

I have an interesting task: monitoring network traffic.
I am monitoring a lot of interfaces, mainly RX uplinks in our network, and I have the usual graphs.
Sometimes on some uplink I can see the traffic, or the power of the fiber optic line, continuously weakening.
I need to define triggers for:
1 dB decrease in 1 hour (if the dB value drops by 1 dB within the last hour, e.g. it is -30 dB and within an hour falls to -31 dB)
1.5 dB decrease in 1 day
2 dB decrease in 1 month
2.5 dB decrease in 6 months
3 dB decrease in 1 year
The goal is to be informed about a continuously weakening uplink in time, because we need to schedule the repair of the cable in time.
See the "time shift" functionality in trigger functions - you can compare the last value with the value some time (one hour) ago.
More advanced alerting could be implemented with predictive trigger functions, although that does not seem to be necessary for your current needs.
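For the 1 dB / 1 hour case, the trigger expression could look roughly like the line below; the host name and item key are placeholders, and the exact function syntax depends on your Zabbix version (this is the 5.4+ style, so treat it as a sketch to adapt):

last(/fiber-host/rx.power.dbm) - last(/fiber-host/rx.power.dbm,#1:now-1h) < -1

The same pattern with longer time shifts (1d, 30d, 180d, 365d) and the corresponding thresholds covers the other windows.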

Call STT based on customer response

I am placing calls to customers using my own custom Asterisk dialer.
When the customer picks up the call and starts speaking with us, the cases are as follows:
The customer's first response is 3 seconds long, and then the system responds.
The customer's second response is 10 seconds long, and then the system responds.
Ideally, the system should call the STT API the first time after 3 seconds, when the customer pauses at the end of their sentence/response, and the second time after 10 seconds, again when the customer pauses at the end of their response.
How do we achieve this? Our current dialer calls the STT API at a fixed interval of 3 seconds after the customer answers the call.
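One possible direction, sketched in Python (this is not Asterisk-specific and not your dialer's API; thresholds and frame sizes are illustrative): buffer the audio and call the STT API only after a short run of silence, i.e. when the customer has actually paused, instead of on a fixed 3-second timer.

import array

def frame_energy(frame_bytes):
    samples = array.array("h", frame_bytes)          # 16-bit signed mono PCM
    return sum(abs(s) for s in samples) / max(len(samples), 1)

def utterance_ends(frames, energy_threshold=500, silence_frames=25):
    # frames: iterable of 20 ms PCM chunks; yields the index where a pause is detected
    # (25 quiet chunks in a row is roughly half a second of silence).
    quiet, in_speech = 0, False
    for i, frame in enumerate(frames):
        if frame_energy(frame) >= energy_threshold:
            in_speech, quiet = True, 0
        elif in_speech:
            quiet += 1
            if quiet >= silence_frames:
                yield i                              # send the buffered audio to STT here
                in_speech, quiet = False, 0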

Measuring time difference between networked devices

I'm adding networked multiplayer to a game I've made. When the server sends an update packet to the client, I include a timestamp so that the client knows exactly when that information is valid. However, the server computer and the client computer might have their clocks set to different times (maybe even just a few seconds difference), so the timestamp from the server needs to be translated to the client's local time.
So, I'd like to know the best way to calculate the time difference between the server and the client. Currently, the client pings the server for a time stamp during initialization, takes note of when the request was sent and when it was answered, and guesses that the time stamp was generated roughly halfway along the journey. The client also runs 10 of these trials and takes the average.
But, the problem is that I'm getting different results over repeated runs of the program. Within each set of 10, each measurement rarely diverges by more than 400 milliseconds, which might be acceptable. But if I wait a few minutes between each run of the program, the resulting averages might disagree by as much as 2 seconds, which is not acceptable.
Is there a better way to figure out the difference between the clocks of two networked devices? Or is there at least a way to tweak my algorithm to yield more accurate results?
Details that may or may not be relevant: The devices are iPod Touches communicating over Bluetooth. I'm measuring pings to be anywhere from 50-200 milliseconds. I can't ask the users to sync up their clocks. :)
Update: With the help of the below answers, I wrote an objective-c class to handle this. I posted it on my blog: http://scooops.blogspot.com/2010/09/timesync-was-time-sink.html
I recently took a one-hour class on this and it wasn't long enough, but I'll try to boil it down to get you pointed in the right direction. Get ready for a little algebra.
Let s equal the time according to the server. Let c equal the time according to the client. Let d = s - c. d is what is added to the client's time to correct it to the server's time, and is what we need to solve for.
First we send a packet from the server to the client with a timestamp. When that packet is received at the client, it stores the difference between its own clock and the given timestamp as t1.
The client then sends a packet to the server with its own timestamp. The server sends the difference between its own clock and that timestamp back to the client as t2.
Note that t1 and t2 both include the "travel time" t of the packet plus the time difference between the two clocks d. Assuming for the moment that the travel time is the same in both directions, we now have two equations in two unknowns, which can be solved:
t1 = t - d
t2 = t + d
t1 + d = t2 - d
d = (t2 - t1)/2
The trick comes because the travel time is not always constant, as evidenced by your pings between 50 and 200 ms. It turns out to be most accurate to use the timestamps with the minimum ping time. That's because your ping time is the sum of the "bare metal" delay plus any delays spent waiting in router queues. Every once in a while, a lucky packet gets through without any queuing delays, so you use that minimum time as the most repeatable time.
Also keep in mind that clocks run at different rates. For example, I can reset my computer at home to the millisecond and a day later it will be 8 seconds slow. That means you have to continually readjust d. You can use the slope of various values of d computed over time to calculate your drift and compensate for it in between measurements, but that's beyond the scope of an answer here.
Hope that helps point you in the right direction.
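The same calculation in the ping form you are already using, as a Python sketch (each sample is a (client_send, server_timestamp, client_receive) tuple from one request/reply; the names are illustrative):

# Sketch: estimate d (server time minus client time) from ping samples, keeping only
# the sample with the smallest round trip, as suggested above.
def estimate_offset(samples):
    client_send, server_time, client_receive = min(samples, key=lambda s: s[2] - s[0])
    # Assume the server stamped its reply halfway through the shortest round trip:
    return server_time - (client_send + client_receive) / 2.0

# server_now ~= client_now + estimate_offset(samples)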
Your algorithm will not be much more accurate unless you can use some statistical methods. First of all, 10 is probably not sufficient. The first and simplest change would be to gather 100 transit time samples and toss out the x longest and shortest.
Another thing to add would be that both clients send their own timestamp in each packet. Then you can also calculate how different their clocks are and check the average difference between the clocks.
You can also look at SNTP and NTP implementations, since those protocols are designed to do exactly this.
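The trimming suggestion above, as a small Python sketch (the cutoff x is whatever you choose):

# Sketch: average the transit-time samples after dropping the x fastest and x slowest.
def trimmed_mean(samples, x=10):
    kept = sorted(samples)[x:len(samples) - x]
    return sum(kept) / len(kept)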
