I store the duration of a certain operation in Graphite. In my Grafana dashboard I show all the points that are greater than 15 minutes. I also want to show a daily count of such incidents. Is it possible to do this in Graphite/Grafana without adding a new metric?
To show only points whose value is greater than 15 minutes (15 min = 900,000 ms):
removeBelowValue(test.a.b.c, 900000)
To get a running count of hits from the above:
A: removeBelowValue(test.a.b.c, 900000)
B: integral(divideSeries(removeBelowValue(test.a.b.c, 900000), #A))
Once you have both series queries entered, you can click the eye icon next to the A series to hide it, since the value we care about comes from B.
The value of this series will be the number of times the A query has been at or above 900000.
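The trick in query B is that divideSeries(X, X) yields 1 at every timestamp where X has a non-null value, and integral() then accumulates those 1s into a running count. A minimal Python sketch of that pipeline (toy data and simplified semantics, not Graphite's actual implementation):

```python
# Toy model of the Graphite pipeline above. Durations are in ms.
def remove_below_value(series, threshold):
    # Null out every point below the threshold, like removeBelowValue()
    return [v if v is not None and v >= threshold else None for v in series]

def divide_series(a, b):
    # Pointwise division; dividing a series by itself yields 1 wherever a point exists
    return [x / y if x is not None and y else None for x, y in zip(a, b)]

def integral(series):
    # Running sum that skips nulls, like integral()
    total, out = 0, []
    for v in series:
        if v is not None:
            total += v
        out.append(total)
    return out

durations_ms = [120000, 950000, 300000, 1200000, 900000]
a = remove_below_value(durations_ms, 900000)   # query A
b = integral(divide_series(a, a))              # query B: running count
print(b)  # [0, 1.0, 1.0, 2.0, 3.0]
```

The last point of B is the total number of incidents in the selected time range.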
I am trying to create a single vector from EEG sleep data sampled at 256 Hz that lists each sleep stage and event as they occur in chronological order. The goal is to be able to process the sleep data stage by stage, but with the times when the individual is awake removed (so Wake and Arousal events need to be coded so they can be removed from analyses).
My problem is that my EEG program gives me events and stages in an overlapping format. For example, my readout of scored events lists Wake as starting at 0 and lasting 3270000000 µs. Then stage N1 might occur next and last for one or more 30-second epochs (30000000 µs each). In the middle of this time period, a microarousal might occur at a very particular moment for a certain period of time; it is listed after this sleep stage, and then the next sleep stage or event is listed.
Example of data in csv file:
[screenshot of the scored-events CSV omitted]
You can see that the date and start time occur in order, and that the start times for sleep stages (marked by the hypnogram type) all begin at 0, with the relative start time increasing incrementally according to the duration. The durations of all sleep stages are in 30 s increments (expressed in microseconds). For example, in the screenshot N2 lasts 360000000 µs (360 s, i.e. twelve 30 s epochs): N2 starts at 20040000000 and N3 starts at 20400000000, exactly 360000000 µs later. The problem is the arousals: they occur during N2 sleep with their own start times and durations.
So, I don't know how to effectively insert all the time points chronologically into a single vector when there are overlapping events. Can anyone help me, please?
% Input CSV file
infile="C:\Users\jerom\OneDrive\Desktop\PSG_Scoring\101_C_Scoring.csv";
% Read the CSV file
[~,~,table]=xlsread(infile);
start_musec=cell2mat(table(2:end,5));
duration_musec=cell2mat(table(2:end,6));
events=table(2:end,10);
% Define event time courses
event_name={'Wake' 'Lights' 'N1' 'N2' 'N3' 'REM' 'Arousal'};
sfreq_Hz=256;
tmax_samp=ceil(sfreq_Hz*(start_musec(end)+duration_musec(end))/1e6);
event_vectors=[];
event_vectors.time_sec=((1:tmax_samp)-1)/sfreq_Hz;
time_musec=(event_vectors.time_sec)*1e6;
for evt=1:numel(event_name)
    % One binary vector per event type, one entry per EEG sample
    event_vectors.(event_name{evt})=zeros(1,tmax_samp);
    for line=1:numel(start_musec)
        if contains(events(line),event_name{evt})
            is_in_current_event=(time_musec >= start_musec(line)) & (time_musec <= start_musec(line)+duration_musec(line));
            event_vectors.(event_name{evt})(is_in_current_event)=1;
        end
    end
end
I was able to convert the time points using the sampling frequency, but I was expecting to be able to connect this into data points that could be concatenated into a single vector; instead I end up with a table.
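One way to handle the overlap is to code each stage or event as an integer in a single per-sample vector, writing arousals last so they overwrite the stage they fall inside; Wake and Arousal samples can then be dropped in one step. A Python/NumPy sketch of that idea (the event list and timings below are hypothetical toy values, not the real CSV):

```python
import numpy as np

# Hypothetical toy events in file order: (name, start_us, duration_us)
events = [
    ("Wake",    0,         60000000),   # 60 s
    ("N2",      60000000,  60000000),   # two 30 s epochs
    ("Arousal", 75000000,  5000000),    # 5 s arousal inside N2
    ("N3",      120000000, 30000000),   # one 30 s epoch
]
sfreq_hz = 256

# Integer code per event type; Arousal gets the highest code so it wins overlaps
codes = {name: i for i, name in enumerate(["Wake", "N1", "N2", "N3", "REM", "Arousal"])}

t_end_us = max(s + d for _, s, d in events)
n_samp = int(np.ceil(sfreq_hz * t_end_us / 1e6))
time_us = np.arange(n_samp) / sfreq_hz * 1e6

# Write low-priority events first; higher-code events overwrite their samples
hypno = np.full(n_samp, -1)
for name, start, dur in sorted(events, key=lambda e: codes[e[0]]):
    mask = (time_us >= start) & (time_us < start + dur)
    hypno[mask] = codes[name]

# Sleep-only analysis: drop Wake and Arousal samples
sleep_only = hypno[~np.isin(hypno, [codes["Wake"], codes["Arousal"]])]
```

The result is one chronological vector per sample, from which any stage can be selected or excluded by its code.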
I am trying to create a simple revenue per person calc that works with different filters within the data. I have it working for a single record, however, it breaks and aggregates incorrectly with multiple records.
The formula I have now is simply Sum([Revenue]) / Sum([Attendance]). This works when I only have a single event selected; however, as soon as I select multiple shows it aggregates incorrectly and doesn't do the weighted average.
I'm making some assumptions here, but hopefully this will help you out. I've created an .xlsx file with the following data:
Event Revenue Attendance
Event 1 63761 6685
Event 2 24065 3613
Event 3 69325 4635
Event 4 41996 5414
Inside Tableau I've created the calculated column for Rev Per Person.
Finally, in the Analysis dropdown I've enabled Show Column Grand Totals, which adds a grand-total row to the view.
Simple Fix
The problem is that all of the column totals are being calculated using the SUM aggregation. This is the desired behavior for Revenue and Attendance, but for Rev Per Person, you want to display the average.
In Analysis/ Totals / Total All Using you can configure the default aggregation. Here we don't want to set all of them though; but it's useful to know. Leave that where it is, and instead click on the Rev Per Person Grand Total value and change it from 'Automatic' to 'Average'.
Now you'll see a number much closer to the expected value.
But it's not exactly what you expect: the average of all the Rev Per Person values gives us $9.73, whereas taking total Revenue / total Attendance gives the expected $9.79.
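The gap between the two numbers is the classic average-of-ratios vs. ratio-of-sums difference, which can be checked directly against the table above:

```python
# Rev Per Person computed two ways, using the event table above
revenue    = [63761, 24065, 69325, 41996]
attendance = [6685, 3613, 4635, 5414]

per_event = [r / a for r, a in zip(revenue, attendance)]
avg_of_ratios = sum(per_event) / len(per_event)  # what the 'Average' total shows
ratio_of_sums = sum(revenue) / sum(attendance)   # overall revenue per person

print(round(avg_of_ratios, 2))  # 9.73
print(round(ratio_of_sums, 2))  # 9.79
```

Only the ratio of sums weights each event by its attendance, which is why the simple fix lands slightly off.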
Slightly More Involved Fix
First - undo the simple fix. We'll keep all of the totals at 'Default'. Instead, we'll modify the Rev Per Person calculation.
IF Size() > 1 THEN
    // Grand Total
    SUM([Revenue]/[Attendance])
ELSE
    // Regular View
    SUM([Revenue])/SUM([Attendance])
END
Size() is being used to determine if the calculation is being done for an individual cell or not.
More information on Size() and similar functions can be found on Tableau's website here - https://onlinehelp.tableau.com/current/pro/desktop/en-us/functions_functions_tablecalculation.html
Now I see the expected value of $9.79.
I have a series of data that increases by time and resets to zero at 18:00 every day. How can I make a Graphite plot that only contains datapoints at 17:59 in the last 30 days?
I have tried summarize(1d, max, false), but by default it bins data into buckets that are aligned by rounding relative to the current time, so I cannot specify that each bucket should begin at 18:00.
I couldn't find anything that exactly matches what you want. There are functions like timeSlice and timeStack, but they don't really fit.
An alternative is the Graphite function nonNegativeDerivative. It ignores counter resets to zero and shows only the counter increments.
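If the goal is really the daily total just before the reset, another option may be summarize's fourth argument, alignToFrom: when set to true, buckets are aligned to the start of the query's time range instead of being rounded relative to now. With a dashboard time range that starts at 18:00, each 1-day bucket then runs from 18:00 to 18:00 (the metric name here is a placeholder):

```
summarize(nonNegativeDerivative(my.resetting.counter), "1d", "sum", true)
```

Similarly, summarize(my.resetting.counter, "1d", "max", true) on the raw series would pick up the pre-reset peak at 17:59 in each bucket.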
We've got a system which writes some count metrics using a Counter from the io.dropwizard.metrics Java lib. The system is deployed once a day, so our count metrics reset with each deployment (graph omitted).
What I need is to sum all those daily counts into a monthly value, which I'll then show as a single stat.
How can I do that?
Here is the query for the graph:
sumSeries(mysys.{prod,production}.{server1,server2,server3}.important.metric.count)
This will give you what you want for the last 30 days: summarize(nonNegativeDerivative(mysys.{prod,production}.{server1,server2,server3}.important.metric.count), "30d", "sum", false)
Or use consolidateBy(integral(sumSeries(nonNegativeDerivative(mysys.{prod,production}.{server1,server2,server3}.important.metric.count))), 'max') and set "Override relative time" in Grafana (it's on "Time range" tab in Graph's settings) for 30 days
I am feeding data into a metric, let's say it is "local.junk". What I send is just that metric, a 1 for the value, and the timestamp:
local.junk 1 1394724217
where the timestamp changes, of course. I want to graph the total number of these instances over a period of time, so I used
summarize(local.junk, "1min")
Then I went and made some data entries. I expected to see the number of requests received in each minute, but it always just shows the line at 1. If I summarize over a longer period like 5 minutes, it shows me some random number: I tried 10 requests and the graph shows something like 4 or 5. Am I loading the data wrong, or using the summarize function wrong?
The function summarize() just sums up your data values, so first verify that you are indeed sending the correct values.
Also, to isolate whether the function or the data is at fault, you can run it on carbon's own metricsReceived metric:
summarize(carbon.agents.ip-10-0-0-1-a.metricsReceived,"1hour")
Which version of Graphite are you running?
You may want to check your carbon aggregator settings. By default carbon aggregates data every 10 seconds. Without an entry in aggregation-rules.conf, Graphite only saves the last metric it receives within each 10-second window.
You are seeing the above problem because of that behaviour. You need to add an entry for your metric in aggregation-rules.conf with the sum method, like this:
local.junk (10) = sum local.junk
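To see why the default behaviour flattens the count to 1 while the sum rule preserves it, here is a toy Python model of the two storage strategies (an illustration of the behaviour described above, not carbon's actual implementation):

```python
# Ten "local.junk 1 <timestamp>" events landing inside one 10 s storage slot
points = [(1394724210 + i, 1) for i in range(10)]

def store_last(points, slot_s=10):
    # No aggregation rule: the last value received in a slot wins
    slots = {}
    for ts, v in points:
        slots[ts // slot_s * slot_s] = v
    return slots

def store_sum(points, slot_s=10):
    # With a `sum` aggregation rule: values in a slot are accumulated
    slots = {}
    for ts, v in points:
        key = ts // slot_s * slot_s
        slots[key] = slots.get(key, 0) + v
    return slots

print(list(store_last(points).values()))  # [1]  -> the flat line at 1
print(list(store_sum(points).values()))   # [10] -> the actual request count
```

With the sum rule in place, summarize() then operates on real per-interval counts instead of a constant 1.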