Is it possible to retrieve duration of less than 1 second from the Google Maps Distance Matrix API?

I'm trying to reduce uncertainty when using the Google Maps Distance Matrix API to extract journey time and distance data between a start and an end node on the road network, in order to calculate average speeds over fairly short distances (30 m to 500 m).
I am using the Python googlemaps library.
The standard journey time provided by the API is at 1-second (i.e. integer) resolution. Does anyone know if there is a parameter to extract the journey time at a finer temporal resolution, e.g. 0.1 seconds, when requesting journey distance and duration data from the API?

According to the "unit systems" section in the API documentation,
duration: The length of time it takes to travel this route, expressed in seconds (the value field) and as text. The textual representation is localized according to the query's language parameter.
There are no Required or Optional parameters that can change this setting to return values expressed as fractions of a second.
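To make the limitation concrete, here is what the relevant part of a Distance Matrix response looks like when parsed in Python. The numbers below are made up, but the structure follows the documented response format; note that the duration `value` field is an integer count of seconds, so any speed computed from it inherits the 1-second rounding:

```python
# The dict below mimics the documented Distance Matrix response shape
# (the specific numbers are invented for illustration).
response = {
    "rows": [{
        "elements": [{
            "status": "OK",
            "distance": {"text": "0.4 km", "value": 420},  # metres
            "duration": {"text": "1 min", "value": 52},    # whole seconds
        }]
    }]
}

element = response["rows"][0]["elements"][0]
seconds = element["duration"]["value"]
metres = element["distance"]["value"]

print(type(seconds).__name__)   # int -- no fractional part is available
print(metres / seconds)         # average speed in m/s, limited by 1 s rounding
```

Over a 30 m journey a ±0.5 s rounding error can shift the computed speed by several km/h, which is why the integer resolution matters at these distances.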

Related

Does `continued_fraction(d)` for a quadratic field element d compute whole preperiod and period eagerly?

I'm new to Sage and toying with continued fractions. I noticed that my code suffers a performance hit when I use continued_fraction(d), where d is represented as a QuadraticField element and the continued fraction has a long period (in the thousands). There is no slowness if the same value is represented as a regular sage.symbolic.expression.Expression.
Does Sage compute the preperiod and period eagerly for elements of quadratic fields? Am I better off avoiding QuadraticField values if all I need is a fixed number of initial partial quotients, regardless of period length?
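For comparison, here is a pure-Python sketch (not Sage's implementation) of a lazy continued-fraction expansion of sqrt(n), using the standard recurrence for quadratic surds. Because it is a generator, taking the first k partial quotients costs O(k) work regardless of the period length, which is the behaviour you would want if the QuadraticField path really does compute the full period eagerly:

```python
import math
from itertools import islice

def cf_sqrt(n):
    """Lazily yield the partial quotients of the continued fraction of sqrt(n).

    Uses the classical recurrence m' = d*a - m, d' = (n - m'^2)/d,
    a' = floor((a0 + m')/d') with exact integer arithmetic.
    """
    a0 = math.isqrt(n)
    if a0 * a0 == n:        # perfect square: finite expansion [a0]
        yield a0
        return
    m, d, a = 0, 1, a0
    while True:
        yield a
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d

print(list(islice(cf_sqrt(2), 5)))  # [1, 2, 2, 2, 2]
print(list(islice(cf_sqrt(7), 5)))  # [2, 1, 1, 1, 4]
```

Asking `islice` for five quotients never touches the rest of the period, however long it is.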

How to interpret "nearest source location" in the cumulativeCost-function of Google Earth Engine?

I am wondering what the documentation of cumulativeCost() in GEE exactly means by "nearest source location".
"nearest" in terms of "the closest starting pixel, linearly computed" or
"nearest" in terms of "the closest in terms of cumulative cost" ?
For my analysis I would like to know whether the algorithm reduces the number of potential routes in advance (by choosing only one starting point up front), or whether it first evaluates the routes from each pixel to all possible starting points and then keeps the value for the starting pixel with the lowest total cost. Does anyone have more detailed information on how the algorithm works in this case? Thanks.
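Cost-distance transforms of this kind are typically implemented as a multi-source Dijkstra search, which naturally yields the second interpretation: every pixel ends up labelled with the cheapest cumulative cost to any source, even when a different source is closer in straight-line distance. Whether GEE does exactly this internally I can't confirm, but a minimal 1-D sketch illustrates the distinction:

```python
import heapq

def cumulative_cost(costs, sources):
    """Multi-source Dijkstra on a 1-D strip of cells (illustrative sketch).

    Entering cell i costs costs[i]; source cells cost 0.  Each cell ends
    up with the minimum cumulative cost to *any* source -- "nearest" in
    cost terms, not in geometric distance.
    """
    best = [float("inf")] * len(costs)
    heap = [(0, s) for s in sources]
    for s in sources:
        best[s] = 0
    while heap:
        c, i = heapq.heappop(heap)
        if c > best[i]:
            continue                 # stale heap entry
        for j in (i - 1, i + 1):
            if 0 <= j < len(costs) and c + costs[j] < best[j]:
                best[j] = c + costs[j]
                heapq.heappush(heap, (best[j], j))
    return best

# Cell 2 is geometrically closer to source 0, but the cheap route comes
# from source 5: the transform keeps the lower cumulative cost, 3.
print(cumulative_cost([1, 50, 1, 1, 1, 1], {0, 5}))  # [0, 50, 3, 2, 1, 0]
```

Because all sources are seeded into the priority queue at once, no single starting point is privileged in advance; the minimum over sources falls out of the search itself.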

Path finding - Merging different cost functions

In my path finding school project, the user is given 3 options to navigate between two points:
Shortest path (kilometers): I've defined the cost function between each pair of points to be the length of the road that connects them.
Fastest path (each road has a speed limit): I've defined the cost function between each pair of points to be 1/(SpeedLimit).
Simplest path (minimizes turns; a turn occurs when the road changes direction by more than alpha degrees): I've defined a state to be a tuple of a point and a direction, and the cost function to be 1 if the change of direction is larger than alpha and 0 otherwise.
The user then supplies 3 real numbers between 0 and 1 to specify the importance of each navigating option.
So basically the overall cost function should be the sum of the three cost functions described above, each multiplied by the number supplied. My problem is that each cost function has different units: for example, the first cost function is in kilometers while the third is boolean (0 or 1).
How can I convert them so that combining them makes sense?
Define a cost function for each criteria that maps from a path to a real number.
f1(path) = cost associated with the distance of the path
f2(path) = cost of the time taken to traverse the path
f3(path) = cost of the complexity of the route
Defining f1 and f2 should be fairly straightforward. f3 is more complex and subjective, but I suspect it really shouldn't be boolean unless there's some very specific reason you need it to be. The path-complexity function could be, for example, the sum of the angles (in radians) of every turn taken in the trip. Quite a few other choices immediately come to mind (for example, the length of the representation required to describe the path); for f3 you will have to choose whichever suits your purposes best.
Once you have defined the individual cost functions you could get an overall cost for the path by taking a linear combination of those 3 functions:
cost(path) = a1*f1(path) + a2*f2(path) + a3*f3(path)
Finding sensible values for a1, a2, a3 is most of the challenge. There are a few statistical methods you might want to use to do this.

Getting accurate graphite stats_counts

We have the etsy/statsd node application running, flushing stats to carbon/whisper every 10 seconds. If you send 100 increments (counts) in the first 10 seconds, graphite displays them properly, like:
localhost:3000/render?from=-20min&target=stats_counts.test.count&format=json
[{"target": "stats_counts.test.count", "datapoints": [
[0.0, 1372951380], [0.0, 1372951440], ...
[0.0, 1372952460], [100.0, 1372952520]]}]
However, 10 seconds later this number falls to 0, null, or 33.3. Eventually it settles at a value 1/6th of the initial number of increments, in this case 16.6.
/opt/graphite/conf/storage-schemas.conf is:
[sixty_secs_for_1_days_then_15m_for_a_month]
pattern = .*
retentions = 10s:10m,1m:1d,15m:30d
I would like to get accurate counts. Is graphite perhaps averaging the data over the 60-second windows rather than summing it? Using the integral function, after some time has passed, obviously gives:
localhost:3000/render?from=-20min&target=integral(stats_counts.test.count)&format=json
[{"target": "stats_counts.test.count", "datapoints": [
[0.0, 1372951380], [16.6, 1372951440], ...
[16.6, 1372952460], [16.6, 1372952520]]}]
Graphite data storage
Graphite manages the retention of data using a combination of the settings stored in storage-schemas.conf and storage-aggregation.conf. Your retention policy (the snippet from your storage-schemas.conf) tells Graphite to store 1 data point per 10 seconds at its highest resolution (10s:10m) and to aggregate those data points as the data ages and moves into the older, lower-resolution intervals (1m:1d, then 15m:30d). In your case, the data crosses into the next retention interval at 10 minutes, and after 10 minutes the data rolls up according to the settings in storage-aggregation.conf.
Aggregation / Downsampling
Aggregation/downsampling happens when data ages into a time interval with lower-resolution retention. In your case, you'll have been storing 1 data point per 10-second interval, but once that data is over 10 minutes old Graphite will store it as 1 data point per 1-minute interval. This means you must tell Graphite how to take the 10-second data points (of which you have 6 per minute) and aggregate them into 1 data point for the entire minute. Should it average? Should it sum? Depending on the type of data (e.g. timing, counter) this can make a big difference, as you hinted at in your post.
By default graphite will average data as it aggregates into lower resolution data. Using average to perform the aggregation makes sense when applied to timer (and even gauge) data. That said, you are dealing with counters so you'll want to sum.
For example, in storage-aggregation.conf:
[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
UI (and raw data) aggregation / downsampling
It is also important to understand how the aggregated/downsampled data is represented when viewing a graph or looking at raw (json) data for different time periods, as the data retention schema thresholds directly impact the graphs. In your case you are querying render?from=-20min which crosses your 10s:10m boundary.
Graphite will display (and perform realtime downsampling of) data according to the coarsest-precision archive needed to cover the requested time range. Stated another way, if you graph data that spans one or more retention intervals you will get rollups accordingly. An example will help (assuming the retention of: retentions = 10s:10m,1m:1d,15m:30d):
Any graph with data no older than the last 10 minutes will be displaying 10 second aggregations. When you cross the 10 minute threshold, you will begin seeing 1 minute worth of count data rolled up according to the policy set in the storage-aggregation.conf.
Summary / tldr;
Because you are graphing/querying for 20 minutes worth of data (e.g. render?from=-20min) you are definitely falling into a lower precision storage setting (i.e. 10s:10m,1m:1d,15m:30d) which means that aggregation is occurring according to your aggregation policy. You should confirm that you are using sum for the correct pattern in the storage-aggregation.conf file. Additionally, you can shorten the graph/query time range to less than 10min which would avoid the dynamic rollup.
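The rollup arithmetic explains the exact numbers in the question: with the default average aggregation, six 10-second points (one holding all 100 counts) collapse into 100/6 ≈ 16.6, while sum would have preserved the total:

```python
# One minute of 10-second buckets: the 100 increments landed in a single
# flush interval and the remaining five intervals recorded zero.
points = [100.0, 0.0, 0.0, 0.0, 0.0, 0.0]

print(sum(points) / len(points))  # average -> 16.66...  (the value observed)
print(sum(points))                # sum     -> 100.0     (the value wanted)
```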

Converting Real and Imaginary FFT output to Frequency and Amplitude

I'm designing a real time Audio Analyser to be embedded on a FPGA chip. The finished system will read in a live audio stream and output frequency and amplitude pairs for the X most prevalent frequencies.
I've managed to implement the FFT so far, but its current output is just the real and imaginary parts for each window. What I want to know is: how do I convert this into frequency and amplitude pairs?
I've been doing some reading on the FFT, and I see how the outputs can be turned into a magnitude and phase relationship, but I need a format that someone without knowledge of complex mathematics could read!
Thanks
Thanks for these quick responses!
The output from the FFT I'm getting at the moment is a continuous stream of real and imaginary pairs. I'm not sure whether to break these up into packets of the same size as my input packets (64 values), and treat them as an array, or deal with them individually.
The sample rate, I have no problem with. As I configured the FFT myself, I know that it's running off the global clock of 50MHz. As for the Array Index (if the output is an array of course...), I have no idea.
If we say that the output is a series of One-Dimensional arrays of 64 complex values:
1) How do I find the array index [i]?
2) Will each array return a single frequency part, or a number of them?
Thank you so much for all your help! I'd be lost without it.
Well, the bad news is, there's no way around needing to understand complex numbers. The good news is, just because they're called complex numbers doesn't mean they're, y'know, complicated. So first, check out the wikipedia page, and for an audio application I'd say, read down to about section 3.2, maybe skipping the section on square roots: http://en.wikipedia.org/wiki/Complex_number
What that's telling you is that if you have a complex number, a + bi, you can picture it as living in the x,y plane at location (a,b). To get the magnitude and phase, all you have to do is find two quantities:
The distance from the origin of the plane, which is the magnitude, and
The angle from the x-axis, which is the phase.
The magnitude is simple enough: sqrt(a^2 + b^2).
The phase is equally simple: atan2(b,a).
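In Python the two quantities fall straight out of the standard library (3 + 4i below is just an arbitrary example value, not anything from a real FFT):

```python
import cmath
import math

z = 3 + 4j                                     # an example complex FFT output
magnitude = math.sqrt(z.real**2 + z.imag**2)   # distance from the origin
phase = math.atan2(z.imag, z.real)             # angle from the x-axis

print(magnitude)                # 5.0
print(phase == cmath.phase(z))  # True -- cmath.phase computes the same atan2
```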
The FFT result will give you an array of complex values. Twice the magnitude (the square root of the sum of the squared real and imaginary components) of each array element, divided by the FFT length for a typical unnormalized FFT, is an amplitude. Or take a log magnitude if you want a dB scale. The array index gives you the center of the frequency bin with that amplitude. You need to know the sample rate and FFT length to get the frequency of each array element or bin.
f[i] = i * sampleRate / fftLength
for the first half of the array (the other half is just duplicate information in the form of complex conjugates for real audio input).
The frequency of each FFT result bin may be different from any actual spectral frequencies present in the audio signal, due to windowing or so-called spectral leakage. Look up frequency estimation methods for the details.
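Putting the pieces together, here is a small self-contained sketch: a naive DFT (standing in for the FPGA FFT core's output) of a 64-sample window containing a tone placed exactly on bin 5, followed by the amplitude and bin-frequency computation described above. The sample rate and window length are example values, not anything from the original design:

```python
import cmath
import math

sample_rate = 8000.0
fft_length = 64
tone = 5 * sample_rate / fft_length   # 625 Hz: lands exactly on bin 5
signal = [math.sin(2 * math.pi * tone * t / sample_rate)
          for t in range(fft_length)]

# Naive DFT: one complex value per bin (a slow but clear stand-in for
# the real/imaginary pairs the FFT core emits).
bins = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / fft_length)
            for t in range(fft_length))
        for k in range(fft_length)]

peaks = []
for k in range(fft_length // 2):              # upper half mirrors the lower
    amplitude = 2 * abs(bins[k]) / fft_length  # a unit sine reads as ~1.0
    freq = k * sample_rate / fft_length        # f[i] = i * sampleRate / fftLength
    if amplitude > 0.5:
        peaks.append((freq, round(amplitude, 6)))

print(peaks)  # [(625.0, 1.0)]
```

Because the tone sits exactly on a bin center there is no leakage here; a tone between bin centers would smear across neighbouring bins, which is where the frequency-estimation methods mentioned above come in.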
