I have a question regarding the wind speed metric in the HERE Weather Observation API.
Below is the link and an example.
https://developer.here.com/documentation/weather/topics/example-weather-observation.html
I am not sure if you are looking for the definition of windSpeed; if so, here it is:
Wind speed in km/h or mph. If the element is in the response and contains a value, the string holds a double:
<windSpeed>1.85</windSpeed>
If the element is in the response but does not contain a value, the string holds an asterisk (*):
<windSpeed>*</windSpeed>
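Because of this, a client has to treat the field as a string first. A minimal Python sketch of the check (`parse_wind_speed` is a hypothetical helper name, not part of the API):

```python
def parse_wind_speed(text):
    """Return the wind speed as a float, or None when the API
    reports a missing value with an asterisk."""
    if text.strip() == "*":
        return None
    return float(text)
```

So `parse_wind_speed("1.85")` yields a number, while `parse_wind_speed("*")` yields `None`.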
For more detail about the metric and these parameters, see:
https://developer.here.com/documentation/weather/topics/resource-report.html#resource-report__param-product
Hope this helps!
I'm trying to compute (MACD - signal) / signal on the prices of the Russell 1000 (an index of the 1,000 largest US large-cap stocks). I keep getting the error message below and simply can't figure out why:
Error in EMA(c(49.85, 48.98, 48.6, 49.15, 48.85, 50.1, 50.85, 51.63, 53.5, : n = 360 is outside valid range: [1, 198]
I'm still relatively new to R, although I'm proficient in Python. I suppose I could have used try() to work around this error, but I do want to understand at least what causes it.
Without further ado, here is the code:
N <- 1000
DF_t <- data.frame(ticker = rep("", N), macd = rep(NA, N), stringsAsFactors = FALSE)
stock <- test[["Ticker"]]
i <- 0
for (val in stock) {
  # Fetch daily closing prices from Bloomberg
  dfpx <- bdh(c(val), c("px_last"),
              start.date = as.Date("2018-01-01"),
              end.date   = as.Date("2019-12-30"))
  # MACD with fast = 60, slow = 360, signal = 45, all EMAs
  macd <- MACD(dfpx[, "px_last"], 60, 360, 45, maType = "EMA")
  num <- dim(macd)[1]
  # (MACD - signal) / signal at the most recent bar
  ma <- (macd[num, ][1] - macd[num, ][2]) / macd[num, ][2]
  i <- i + 1
  DF_t[i, ] <- list(val, ma)
}
For your information, bdh() is a Bloomberg command that fetches historical data, dfpx is a data frame, and MACD() is a function that takes a time series of prices and outputs a matrix whose first column holds the MACD values and whose second column holds the signal values.
Thank you very much! Any advice would be really appreciated. By the way, the code works with a small sample of a few stocks, but it triggers the error message when I apply it to the universe of one thousand stocks. In addition, the number of data points is about 500 per stock, which should be large enough for my parameter choices when computing the MACD.
Q : "...and error handling"
If I may add a grain of salt here: error prevention is far better than any after-the-fact error handling.
There is a cheap step, constant O(1) in both time and space, that prevents this class of crash in principle: prepend to the instantiated vector of time-series data as many constant, process-invariant value cells as the maximum depth of any vector processing requires, and any such error or exception is avoided by construction.
The process-invariant value is, in most cases, the first known value, repeated as many times as needed back in time toward the older (not-present) bars. This avoids relying on NaNs, and on the trouble NaNs can cause in methods that are sensitive to missing data, as described above. Q.E.D.
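The padding idea can be sketched as follows (a minimal Python sketch; `pad_series` is a hypothetical helper, and whether padding with a repeated constant is acceptable depends on how sensitive your indicator is to the synthetic prefix):

```python
def pad_series(prices, depth):
    """Prepend copies of the first known value so the series is at
    least `depth` bars long. The prefix is process-invariant: it is
    the same constant repeated, so window functions see no jumps."""
    missing = max(0, depth - len(prices))
    return [prices[0]] * missing + list(prices)
```

With a 360-bar slow EMA, `pad_series(px, 360)` guarantees the vector is deep enough for the window, at the cost of biasing the oldest part of the result toward the first observed price.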
For those who are interested, I have found the cause of the error: some stocks have missing prices. It's as simple as that. For instance, Dow US Equity has only about 180 daily prices (for whatever reason) over the past year and a half, which definitely can't be used to compute a 360-day moving average.
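One way to guard against this is to skip any series shorter than the slow window before calling MACD (a hypothetical Python sketch of the same check; in the R code it would amount to testing nrow(dfpx) before calling MACD()):

```python
def filter_tickers(price_map, n_slow=360):
    """Keep only tickers with enough history for the slow EMA window.

    price_map: dict mapping ticker -> list of daily prices (assumed shape).
    n_slow: the slow EMA length; MACD needs at least this many points.
    """
    return [ticker for ticker, prices in price_map.items()
            if len(prices) >= n_slow]
```

Tickers that fail the check can be logged and revisited, instead of crashing the whole loop.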
I basically ran small samples until I eventually pinpointed what caused the error message. Generally speaking, unless you are trying to extract data for more than about 6,000 stocks while querying, say, 50 fields, you are okay. A rule of thumb for Bloomberg's daily usage limit is said to be around 500,000 for a school console. A PhD colleague working at a trading firm also told me that professional Bloomberg consoles are more forgiving.
I am calculating elevations to plot a graph with the Nokia (HERE) API, and the elevation value it returns at a point near the sea is 44.0 (meters?). If I get the elevation of the same point from other online map services, they return 3 meters. What is going on with the value returned by Nokia?
I am using the Routing API (https://route.api.here.com/routing/7.2/calculateroute.json) with the parameters "returnelevation=true&routeAttributes=all&representation=navigation&mode=fastest;car", and this is the JSON response:
"shape":[
"36.8142901,-2.5617902,44.0",
"36.814338,-2.5617993,44.0",
"36.8146276,-2.562207,45.0",
"36.8153036,-2.5631618,49.0",
"36.8153572,-2.5632262,50.0",
"36.8154109,-2.5632584,50.0",
"36.8155181,-2.5632906,51.0",
"36.8155932,-2.5633442,52.0",
"36.815722,-2.5636446,56.0",
"36.8158185,-2.5635159,56.0"
],
Thank you for raising this issue with us. We have asked our data team to check it. You can also report data discrepancies at https://mapcreator.here.com/
Here is an example of an entry in the Time to Sample box (stts) of an MP4 file:
Entry{count=1, delta=4147}
How is the value of delta calculated? I guess it involves I- or B-frames, but I couldn't find any documentation. Can anyone help me?
The mdhd box contains a Time Scale field: the number of time units that pass per second. The stts delta is expressed in those units, so you can use the timescale to convert deltas into times.
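As a worked example of the conversion (assuming a timescale of 90000, a common value for video tracks; the actual number comes from your file's mdhd box):

```python
timescale = 90000                     # time units per second, from mdhd (assumed value)
delta = 4147                          # sample duration in time units, from the stts entry

frame_duration = delta / timescale    # seconds each sample is displayed
fps = timescale / delta               # resulting frame rate, roughly 21.7 fps here
```

So delta is simply the per-sample duration in timescale units; it comes from the container's timing, not directly from whether a frame is an I- or B-frame.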
I am trying to understand how Graphite treats oversampling. I read the documentation but could not find the answer.
For example, if I specify in Graphite that the retention policy should be 1 sample per 60 seconds and Graphite receives something like 200 values in those 60 seconds, what exactly will be stored? Will Graphite take an average, or a random point out of those 200?
Short answer: it depends on the configuration; the default is to take the last one.
Long answer: Graphite can be configured, using regular expressions, with a strategy for aggregating several points into one sample.
These strategies are configured in the storage-aggregation.conf file, using regexps to select metrics:
[all_min]
pattern = \.min$
aggregationMethod = min
This example configuration aggregates points using their minimum.
By default, the last point to arrive wins.
This strategy is always used when aggregating from higher resolutions to lower resolutions.
For example, if storage-schemas.conf contains:
[all]
pattern = .*
retentions = 1s:8d,1h:1y
With the sum aggregation method, all points arriving within the same second are summed and stored at one-second resolution.
Points older than 8 days are then aggregated again, down to one-hour resolution.
Note that the aggregation configuration only applies when moving from archive i to archive i+1; for oversampling within a single period, the last sample always wins.
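The two behaviors can be sketched like this (a simplified Python model of the rules described above, not Graphite's actual implementation):

```python
def store_oversampled(points):
    """Many writes land in one retention period: the last point wins."""
    return points[-1]

def downsample(points, method):
    """Moving from archive i to archive i+1 applies aggregationMethod."""
    methods = {
        "min": min,
        "max": max,
        "sum": sum,
        "average": lambda p: sum(p) / len(p),
        "last": lambda p: p[-1],
    }
    return methods[method](points)
```

For the question's case (200 values in one 60-second period), `store_oversampled` models what happens: only the last value survives, regardless of the aggregation method.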
The recommendation is to match your sampling rate with the configuration; see the related Graphite issue.
Is there a way to use WMS GetFeatureInfo with a TIME period (e.g. 2014-01-01/2014-03-01) to extract a series of values from a raster layer loaded from a GeoServer instance?
Thanks in advance.
Not at the moment, no. It may be added in the future, though; it's not the first time I've heard this request. I don't have an ETA; it depends on when funding to work on it shows up.
In the meantime, a somewhat involved workaround: configure the image mosaic index as a WFS feature type, query it by date to figure out the exact time values intersected by the interval, and then issue N GetFeatureInfo requests, one for each of those values.
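The request-per-time-value part of that workaround could be scripted roughly like this (a Python sketch; the server URL, layer name, and coordinates are placeholders, and a real GetFeatureInfo request also needs width, height, srs, and format parameters):

```python
BASE = "https://example.com/geoserver"   # placeholder GeoServer URL

def feature_info_urls(times, layer, x, y, bbox):
    """Build one GetFeatureInfo URL per time value found by the WFS
    query against the mosaic index (parameters trimmed to the ones
    relevant here)."""
    return [
        f"{BASE}/wms?service=WMS&version=1.1.1&request=GetFeatureInfo"
        f"&layers={layer}&query_layers={layer}&bbox={bbox}"
        f"&x={x}&y={y}&time={t}"
        for t in times
    ]
```

Each URL then gets fetched individually, and the responses are stitched back into the time series the single TIME-interval request would have produced.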