I would like my conky system monitor to show a time series of rain forecast probabilities as a graph. I request the forecast data via the Dark Sky API and format it to CSV with jq like this:
curl "https://api.darksky.net/forecast/<myapikey>/<mylat>,<mylon>" |
jq '.minutely.data | map([.time, .precipProbability] | join(",")) | join("\n") ' |
sed 's/"//g' | sed 's/\\n/\n/g'
which produces output like this
1552253100,0
1552253160,0
1552253220,0
1552253280,0
1552253340,0
1552253400,0.01
1552253460,0.03
...
Is there a way to display this data in conky with ${execgraph ...} or similar? As far as I understand, you can only pass a single value at a time to update execgraph, but I want to display an entire time series at once.
At the moment I pass the data to gnuplot, produce a graph and include it in conky as an ${image ...} which works alright, but perhaps there is a native conky solution.
If displaying the probabilities from when conky starts and thereafter is sufficient, you could use ${execgraph ...} and just pass the latest value in the series each time conky updates.
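For example, a minimal sketch (rain_prob.sh is a hypothetical helper script; precipProbability is 0-1, so it is scaled to 0-100 for the graph):
#!/bin/sh
# rain_prob.sh (hypothetical): print the latest precipProbability as 0-100
curl -s "https://api.darksky.net/forecast/<myapikey>/<mylat>,<mylon>" |
jq -r '.minutely.data[0].precipProbability * 100'
and in your conkyrc:
${execgraph ~/bin/rain_prob.sh}
Keep in mind conky would run the script on every refresh, so you would want a long update_interval (or a caching wrapper) to avoid hammering the API.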
I created a Fusion sheet that is synced to a dataset. Now I want to use that dataset to create a dictionary in the repo; I am using PySpark in the repo. Later I want to pass that dictionary so that it populates column descriptions, as in Is there a tool available within Foundry that can automatically populate column descriptions? If so, what is it called?.
It would be great if anyone could help me create the dictionary from the dataset using PySpark in the repo.
The following code would convert your PySpark DataFrame into a list of dictionaries:
fusion_rows = [row.asDict() for row in fusion_df.collect()]
However, in your particular case, you can use the following snippet:
col_descriptions = {row["column_name"]: row["description"] for row in fusion_df.collect()}
my_output.write_dataframe(
my_input.dataframe(),
column_descriptions=col_descriptions
)
This assumes your Fusion sheet looks like this:
+------------+------------------+
| column_name| description|
+------------+------------------+
| col_A| description for A|
| col_B| description for B|
+------------+------------------+
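Put together as a Foundry transform, it could look like this (a minimal sketch; the dataset paths and input names are placeholders for your own):
from transforms.api import transform, Input, Output

@transform(
    my_output=Output("/path/to/output_dataset"),
    my_input=Input("/path/to/input_dataset"),
    fusion_input=Input("/path/to/synced_fusion_sheet"),
)
def my_compute_function(my_output, my_input, fusion_input):
    fusion_df = fusion_input.dataframe()
    # Build {column_name: description} from the synced Fusion sheet
    col_descriptions = {row["column_name"]: row["description"]
                        for row in fusion_df.collect()}
    my_output.write_dataframe(
        my_input.dataframe(),
        column_descriptions=col_descriptions,
    )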
In my Angular application I am tracking which filters users apply on one of the pages. What I can later see in Logs is the following (query for the last 24 hours):
What I am interested in is the count of filters, grouped by name. So I created the following query:
However, the problem, as you can see, is that my y-axis starts at 1 instead of 0. To users this looks like the last two filters don't have any values, when in reality they both have a count of 1.
I tried to use ymin=0 together with the render function, but it did not work (the chart still starts at 1). Then I read that I need to use the make-series() function, so I tried:
customEvents
| where timestamp >= ago(24h)
| where customDimensions.pageName == 'product'
| make-series Count=count(name) default=0 on timestamp from datetime(2019-10-10) to datetime(2019-10-11) step 1d by name
| project name, Count
However, the result is some weird matrix instead of a regular table:
I have just started with Application Insights, so any help on this matter would be much appreciated. Thank you!
In Workbooks in Application Insights you could do almost this query (see below for a simplification), then use the chart settings and set the axis min/max explicitly.
But why are you using make-series and then projecting it down to just one series? make-series returns one row per series with array-valued Count and timestamp columns, which is the "weird matrix" you are seeing. In this specific case summarize is simpler:
customEvents
| where timestamp between(datetime(2019-10-10) .. datetime(2019-10-11))
| where customDimensions.pageName == 'product'
| summarize Count=count(name) by name
| render barchart
In the logs blade (where you are), you could do this query, and I believe you can use:
render barchart title="blah" ymin=0
(At some point Workbooks will be able to "see" all the render options like ymin/ymax/xmin/xmax/title/etc., but right now they're all stripped out at the service layer.)
A bit late to the party, but the correct syntax to pass in ymin and ymax when using a query is this:
| ...
| render barchart with (ymin=0, ymax=100)
See https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/renderoperator?pivots=azuremonitor
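Putting the pieces together for the original scenario, the summarize version with the axis pinned at zero would look something like this (a sketch combining the snippets above; count() here simply counts rows per name):
customEvents
| where timestamp >= ago(24h)
| where customDimensions.pageName == 'product'
| summarize Count=count() by name
| render barchart with (ymin=0)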
Is it possible to graph the query resolution time of BIND 9 in Munin?
I know there is a way to graph it on an Unbound server; is this already done for BIND? If not, how do I start writing a Munin plugin for that? I'm getting stats from http://127.0.0.1:8053/ on the BIND 9 server.
I don't believe that "query time" is a function of BIND. About the only time that I see that value (with individual lookups) is when using dig. If you're willing to use that, the following might be a good starting point:
#!/bin/sh
# Munin plugin: report dig's "Query time" for a fixed lookup
case "$1" in
config)
cat <<'EOM'
graph_title Red Hat Query Time
graph_vlabel time
time.label msec
EOM
exit 0;;
esac
echo -n "time.value "
dig www.redhat.com | grep Query | cut -d':' -f2 | cut -d' ' -f2
Note that the second cut statement uses a single space as its delimiter. If you save the above as "querytime" and run it at the command line, the output should look something like:
root@pi1:~# ./querytime
time.value 189
root@pi1:~# ./querytime config
graph_title Red Hat Query Time
graph_vlabel time
time.label msec
I'm not sure of the value in tracking the above, though. The response time is affected by whether the query is an initial lookup or the answer is cached locally, by server load, by intervening network congestion, and so on.
Note: the above may be a bit buggy, as I've written it on the fly, but it should give you a good starting point. That it returned the above output is a good sign.
In any case, I recommend reading the following before you write your own: http://munin-monitoring.org/wiki/HowToWritePlugins
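Once the script works standalone, you can wire it into Munin the usual way and test it through Munin's own runner (paths assume a Debian-style layout; adjust for your distribution):
cp querytime /usr/share/munin/plugins/
chmod +x /usr/share/munin/plugins/querytime
ln -s /usr/share/munin/plugins/querytime /etc/munin/plugins/querytime
munin-run querytime
munin-run querytime config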
Background
I am trying to set up my trade analysis environment. I am running some rule-based strategies on futures with different brokers, and trying to aggregate the trades from the different brokers in one place. I am using the blotter package as my main tool for analysis.
The idea is to use blotter and PerformanceAnalytics to analyse the live performance of the various strategies I am running.
Problem at hand
My source of futures EOD data is CSIData. All the EOD OHLC prices for these futures are stored in CSV format in the following directory structure. For each future there is a separate directory, and each contract of the future has one CSV file with its OHLC price series.
|
+---AD
| AD_201203.TXT
| AD_201206.TXT
| AD_201209.TXT
| AD_201212.TXT
| AD_201303.TXT
| AD_201306.TXT
| AD_201309.TXT
| AD_201312.TXT
| AD_201403.TXT
| AD_201406.TXT
| AD_54.TXT
...
+---BO2
| BO2195012.TXT
| BO2201201.TXT
| BO2201203.TXT
| BO2201205.TXT
| BO2201207.TXT
| BO2201208.TXT
| BO2201209.TXT
| BO2201210.TXT
| BO2201212.TXT
| BO2201301.TXT
...
I have managed to define root contracts in FinancialInstrument for all the futures I will be using (e.g., in the above case, AD, BO2, etc.), with the CSIData symbols as primary identifiers.
I am now struggling with how to define all the actual individual futures contracts (e.g. AD_201203, AD_201206, etc.) and set up their lookup using setSymbolLookup.FI.
Any pointers on how to do that?
To set up individual futures contracts, I looked into ?future_series and ?build_series_symbols; however, the suffixes they support seem to be only in the future month code format. So I have a feeling I am left with setting up each individual contract manually, e.g.
build_series_symbols(data.frame(primary_id=c('ES','NQ'), month_cycle=c('H,M,U,Z'), yearlist = c(10,11)))
[1] "ESH0" "ESM0" "ESU0" "ESZ0" "NQH0" "NQM0" "NQU0" "NQZ0" "ESH1" "ESM1" "ESU1" "ESZ1" "NQH1" "NQM1" "NQU1" "NQZ1"
I have no clue where to start digging for the second part of my question, i.e. setting up the price lookup for these futures from CSI.
PS: If this is not the right forum for this kind of question, I am happy to have it moved to the right section, or even to ask on a totally different forum altogether.
PPS: Can someone with higher reputation tag this question with FinancialInstrument and CSIdata? Thanks!
The first part just works.
R> currency("USD")
[1] "USD"
R> future("AD", "USD", 100000)
[1] "AD"
Warning message:
In future("AD", "USD", 1e+05) :
underlying_id should only be NULL for cash-settled futures
R> future_series("AD_201206", expires="2012-06-18")
[1] "AD_201206"
R> getInstrument("AD_201206")
primary_id :"AD_201206"
currency :"USD"
multiplier :1e+05
tick_size : NULL
identifiers: list()
type :"future_series" "future"
root_id :"AD"
suffix_id :"201206"
expires :"2012-06-18"
Regarding the second part, I've never used setSymbolLookup.FI. I'd either use setSymbolLookup directly, or set a src instrument attribute if I were going to go that route.
However, I'd probably make a getSymbols method, maybe getSymbols.mycsv, that knows how to find your data if you give it a dir argument. Then, I'd just setDefaults on your getSymbols method (assuming that's how most of your data are stored).
I save data with saveSymbols.days(), and use getSymbols.FI daily. I think it wouldn't be much effort to tweak getSymbols.FI to read csv files instead of RData files. So, I suggest looking at that code.
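For example, a minimal sketch of such a method (hypothetical; it assumes plain CSV files named <symbol>.TXT inside a per-root directory with date, OHLC, and volume columns — adapt the file-name pattern and column layout to your actual CSI export settings):
getSymbols.mycsv <- function(Symbols, env, dir="", return.class="xts", ...) {
  require(xts)
  for (sym in Symbols) {
    root <- sub("_.*$", "", sym)                # e.g. "AD" from "AD_201206"
    path <- file.path(dir, root, paste0(sym, ".TXT"))
    fr <- read.csv(path, header=FALSE,
                   col.names=c("Date","Open","High","Low","Close","Volume"))
    x <- xts(fr[, c("Open","High","Low","Close","Volume")],
             order.by=as.Date(as.character(fr$Date), format="%Y%m%d"))
    assign(sym, x, envir=env)                   # getSymbols convention
  }
  Symbols
}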
Then, you can just
setDefaults("getSymbols", src="mycsv")
setDefaults("getSymbols.mycsv", dir="path/to/dir")
Or, if you prefer
setSymbolLookup(AD_201206=list(src="mycsv", dir="/path/to/dir"))
or (essentially the same thing)
instrument_attr("AD_201206", "src", list(src="mycsv", dir="/path/to/dir"))
I want to extract some hourly data from RRDtool databases in order to create some graphs within a dashboard system.
These databases don't have an hourly data source; the closest is a 30-minute data source (they are generated by Munin).
Now, I can use rrdfetch, but that doesn't do the nice averaging that rrdgraph would do. So something like this
rrdtool fetch xxx-apache_accesses-accesses80-d.rrd AVERAGE \
--resolution 3600 -s 1328458200 -e 1328544600
might give me 30-minute data points like this:
2012-Feb-05 16:30:00 3.5376357135e+00
2012-Feb-05 17:00:00 3.4655067194e+00
2012-Feb-05 17:30:00 4.0483210375e+00
2012-Feb-05 18:00:00 4.3210061422e+00
....
I could average those myself, but it seems that rrdgraph can output parsable text; I just can't figure out the correct incantation. Here's what I've tried:
rrdtool graph dummy.png -s 1328523300 -e 1328609700 \
DEF:access=xxx-apache_accesses-accesses80-d.rrd:42:AVERAGE \
"PRINT:access:AVERAGE: %5.1lf %S"
outputs
0x0
4.7
Now, I think that's simply the average over the whole period given, but is there any way to get rrdtool to spit out an average for particular chunks or step sizes? I tried --step, but this did not change the output.
I could call rrdtool graph for each data point I need, but that seems rather wasteful.
No sooner had I posted than I hit upon the right approach!
rrdtool xport -s 1328523300 -e 1328609700 --step 3600 \
DEF:access=xxx-apache_accesses-accesses80-d.rrd:42:AVERAGE \
XPORT:access:"average"
This gives me the dump I need...
<?xml version="1.0" encoding="ISO-8859-1"?>
<xport>
<meta>
<start>1328526000</start>
<step>3600</step>
<end>1328612400</end>
<rows>25</rows>
<columns>1</columns>
<legend>
<entry>average</entry>
</legend>
</meta>
<data>
<row><t>1328526000</t><v>2.1949556516e+00</v></row>
<row><t>1328529600</t><v>2.0074586816e+00</v></row>
<row><t>1328533200</t><v>2.4574720485e+00</v></row>
<row><t>1328536800</t><v>3.4861890250e+00</v></row>
<row><t>1328540400</t><v>4.2725023347e+00</v></row>
<row><t>1328544000</t><v>6.2119548259e+00</v></row>
<row><t>1328547600</t><v>5.6709432075e+00</v></row>
<row><t>1328551200</t><v>6.1214185470e+00</v></row>
<row><t>1328554800</t><v>8.1137357347e+00</v></row>
<row><t>1328558400</t><v>5.8345894022e+00</v></row>
<row><t>1328562000</t><v>6.2264732776e+00</v></row>
<row><t>1328565600</t><v>6.1652113350e+00</v></row>
<row><t>1328569200</t><v>5.8851025574e+00</v></row>
<row><t>1328572800</t><v>5.4612112119e+00</v></row>
<row><t>1328576400</t><v>6.3908056120e+00</v></row>
<row><t>1328580000</t><v>6.0361776174e+00</v></row>
<row><t>1328583600</t><v>6.3164590113e+00</v></row>
<row><t>1328587200</t><v>6.0902986521e+00</v></row>
<row><t>1328590800</t><v>4.6756445168e+00</v></row>
<row><t>1328594400</t><v>3.9461916905e+00</v></row>
<row><t>1328598000</t><v>2.9449490046e+00</v></row>
<row><t>1328601600</t><v>2.4011760751e+00</v></row>
<row><t>1328605200</t><v>2.2187817639e+00</v></row>
<row><t>1328608800</t><v>2.1775208736e+00</v></row>
<row><t>1328612400</t><v>NaN</v></row>
</data>
</xport>
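For completeness, post-processing that XML into plain "timestamp value" lines is straightforward; here's a minimal sketch in Python (newer rrdtool versions also support a --json flag on xport, if JSON is easier to consume):
import xml.etree.ElementTree as ET

# Parse saved xport output and print one "timestamp value" pair per row
tree = ET.parse("xport.xml")
for row in tree.iter("row"):
    t = row.findtext("t")  # UNIX timestamp of the hourly bucket
    v = row.findtext("v")  # averaged value; may be "NaN"
    print(t, v)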