I have some data in the following format:
{
  timestamp: Date;
  a: number;
  b: number;
}
I'm able to create a metric sum(a) and another metric sum(b), but how do I define a third metric std_deviation(sum(a)/sum(b)) in the Kibana web UI?
It turns out Kibana doesn't support the bucket script aggregation, so it is not possible to define a metric based on other metrics.
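For reference, the Elasticsearch query DSL itself can express this with a bucket_script pipeline aggregation feeding an extended_stats_bucket aggregation, even though (per the above) the Kibana UI cannot. A rough sketch of such a request body, assuming a daily date_histogram and a reasonably recent Elasticsearch version:

// Sketch only: the aggregation names and the daily interval are illustrative,
// not something Kibana will generate for you.
const body = {
  size: 0,
  aggs: {
    per_day: {
      date_histogram: { field: "timestamp", calendar_interval: "1d" },
      aggs: {
        sum_a: { sum: { field: "a" } },
        sum_b: { sum: { field: "b" } },
        ratio: {
          bucket_script: {
            buckets_path: { a: "sum_a", b: "sum_b" },
            script: "params.a / params.b"
          }
        }
      }
    },
    // sibling pipeline aggregation: std_deviation (among other stats) of the per-bucket ratios
    ratio_stats: {
      extended_stats_bucket: { buckets_path: "per_day>ratio" }
    }
  }
};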
I am testing an endpoint in Postman using a URL like this: {{api_url}}/stackoverflow/help/{{customer_id}}/{{client_id}}.
I have the api_url, customer_id, and client_id stored in my environment variables. I would like to test multiple customer_id and client_id values without having to change the environment variables manually each time. I created one CSV to store a list of customer_id values and another for client_id. When I go to run the collection, it only allows me to add one file. Is there another way to do this if I want to iterate through my tests to automate them?
You can add both customer_id and client_id to one CSV file, with one column per variable. Postman will iterate n times (n = number of CSV lines, excluding the header), reading both values from each row.
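For example, a data file like this (the identifiers are made up; the column headers must match the variable names used in the URL):

customer_id,client_id
101,9001
102,9002
103,9003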
Alternatively, you can use postman.setNextRequest to control the flow. The code below runs the request once for each value in the arrays defined in the pre-request script.
url:
{{api_url}}/stackoverflow/help/{{customer_id}}/{{client_id}}
Now add this pre-request script:
// read back the remaining values from the previous iteration (undefined on the first run)
const tempArraycustomer_id = pm.variables.get("tempArraycustomer_id")
const tempArrayclient_id = pm.variables.get("tempArrayclient_id")

// modify these arrays to the values you want to test
const arrcustomer_id = tempArraycustomer_id ? tempArraycustomer_id : ["value1", "value2", "value3"]
const arrclient_id = tempArrayclient_id ? tempArrayclient_id : ["value1", "value2", "value3"]

// take the next value from each array for this request, then store what is left
pm.variables.set("customer_id", arrcustomer_id.pop())
pm.variables.set("client_id", arrclient_id.pop())
pm.variables.set("tempArraycustomer_id", arrcustomer_id)
pm.variables.set("tempArrayclient_id", arrclient_id)

// keep re-running this request until the arrays are exhausted
if (arrcustomer_id.length !== 0) {
    postman.setNextRequest(pm.info.requestName)
}
I have the following ALV report generated from the RFKSLD00 program:
I need to generate a report like the one below, based on the above report (as part of my work):
Any ideas on how to do this? I am not asking for a full solution, just some pointers on how to achieve it.
Each line of the original report corresponds to one line of your report; you just need to adjust the sums for local currency, i.e. multiply all values by the local currency rate. The yellow totals lines shouldn't confuse you: they are generated by the grid, not by the report.
The only thing missing in the original report is the debit and credit of the balance carryforward; I suppose in the original you already have the reconciled value. To get separate values for it you will need to inspect the code.
The initial step would be to declare the final structure and a table type based on it:
TYPES: BEGIN OF ty_report,
         rec_acc      TYPE skont,
         vendor       TYPE lifnr,
         ...
         jan_deb      TYPE wrbtr,
         jan_cred     TYPE wrbtr,
         febr_deb     TYPE wrbtr,
         febr_cred    TYPE wrbtr,
         ...
         acc_bal_deb  TYPE wrbtr,
         acc_bal_cred TYPE wrbtr,
       END OF ty_report,
       tt_report TYPE TABLE OF ty_report.

DATA: lt_report TYPE tt_report.
Then you only need to loop over the original report's internal table and fill your final structure, without forgetting the currency conversion:
" pick up the exchange rate to your local currency (AUD used as an example)
SELECT SINGLE * FROM tcurr
  INTO @DATA(tcurr)
  WHERE fcurr = 'EUR'
    AND tcurr = 'AUD'. "<- your local currency

DATA(lv_ukurs) = tcurr-ukurs.
LOOP AT orig_table INTO DATA(orig_line).
  APPEND INITIAL LINE TO lt_report ASSIGNING FIELD-SYMBOL(<fs_rep>).
  MOVE-CORRESPONDING orig_line TO <fs_rep>.

  CASE orig_line-monat. "<- your period
    WHEN '01'.
      <fs_rep>-jan_deb  = orig_line-debit.
      <fs_rep>-jan_cred = orig_line-credit.
    WHEN '02'.
      <fs_rep>-febr_deb  = orig_line-debit.
      <fs_rep>-febr_cred = orig_line-credit.
    ...
  ENDCASE.

  " convert every amount (packed) component of the target line to local currency
  DO 30 TIMES.
    ASSIGN COMPONENT sy-index OF STRUCTURE <fs_rep> TO FIELD-SYMBOL(<field>).
    CHECK sy-subrc = 0.
    DESCRIBE FIELD <field> TYPE DATA(typ).
    CHECK typ = 'P'.
    CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
      EXPORTING
        date             = sy-datum
        foreign_currency = 'EUR'
        local_currency   = 'AUD'
        foreign_amount   = <field>
        type_of_rate     = 'M'
      IMPORTING
        exchange_rate    = lv_ukurs
        local_amount     = <field>.
  ENDDO.
ENDLOOP.
I recommend naming the components of your final structure ty_report the same as in the original wherever possible. That way you can make maximum use of MOVE-CORRESPONDING and avoid manual coding.
This is just a quick sketch, so I may have missed some details or made some errors.
I have a dataset I'm working with that is buildings and electrical power use over time.
There are two aggregations on these buildings that are simple sums across the entire timespan and I have those written. They end up looking like:
var reducer = reductio();
// How much energy is used in the whole system
reducer.value("energy").sum(function (d) {
return +d.Energy;
});
These work great.
The third aggregation, however, is giving me some trouble. I need to find the point at which the sum across all the buildings is at its greatest. I need the max of that sum and the time it happened.
I wrote:
reducer.value("power").sum(function (d) {
return +d.Power;
}).max(function (d) {
return +d.Power;
}).aliasProp({
time: function (d, v) {
return v.Timestamp;
}
});
But this is not necessarily the biggest power use. I'm pretty sure this returns the sum and the time at which any individual building used the most power.
So if the power values at one moment were 1, 1, 1, 15, I would end up with that moment's total of 18, when there might be a different moment where the values were 5, 5, 5, 5 for a total of 20. The 20 is what I need.
I am at a loss for how to get the maximum of a sum. Any advice?
Just to restate: You are grouping on time, so your group keys are time periods of some sort. What you want is to find the time period (group) for which power use is greatest.
If I'm right that this is what you want, then you would not do this in your reducer, but rather by sorting the groups. You can order groups by using the group.order method: https://github.com/crossfilter/crossfilter/wiki/API-Reference#group_order
// During group setup
group.order(function(p) { return p.power.sum; })
// Later, when you want to grab the top power group
group.top(1)
Reductio's max aggregation should just give you the maximum value that occurs within the group. So given a group with values 1,1,1,15, you would get back the value 15. It sounds like that's not what you want.
Hopefully I understood properly. If not, please comment. If you can put together an example with toy data that is public and where you can tell me what you would like to see vs what you are getting, I should be able to help out.
Update based on example:
So, what you want (based on the description in the example) is to find the maximum power usage for any given time within the selected time period. So you would do the following:
var timeDim = buildings.dimension(function(d) { return d.Timestamp })
var timeGrp = timeDim.group().reduceSum(function(d) { return d.Power })
var maxResults = timeGrp.top(1)
Whenever you want to find the max power usage time for your current filter, just call timeGrp.top(1) and the key of that group will be the time with the maximum power.
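For clarity, a small usage sketch of reading the result back (top(1) returns an array of key/value pairs, where the key is the group key, i.e. the timestamp):

// read the peak back out of the group
var top = timeGrp.top(1)[0];
console.log("Peak time:", top.key, "total power at that time:", top.value);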
Note: Don't filter on timeDim as the filters on a dimension are not applied to groups defined on that dimension.
Here's an updated JSFiddle that writes out the maximum group to the console: https://jsfiddle.net/esjewett/1o3robm3/1/
I have Apache logs with the size of each request and the time; I need to plot a graph of the amount of data transferred per unit time.
A sample document looks like below
{
  "#timestamp" : "2015-01-01T00:00:00",
  "bytes" : 20
}
For each minute, I want to take the sum of the bytes field and plot that on a graph in Kibana 3. Can anybody help me with this?
This can be done by editing the histogram panel settings, as shown in the figure below:
"Values" dropdown: select "total" here.
"Values field": the field to be analysed; in your case this is "bytes".
"Time field": the time field in your data, i.e. "#timestamp".
"Interval": set to "1m" here.
You should be able to do this using a histogram panel.
Under values, set 'chart value' to 'total' and use bytes as the value field
Check that 'time field' is #timestamp
Untick auto-interval and set the interval to be '1m'
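For reference, what these settings produce is roughly a per-minute date_histogram with a sum of bytes. A hedged sketch of the equivalent request in Elasticsearch aggregation syntax (Kibana 3 itself issues older facet-style queries, and the exact parameter names vary by Elasticsearch version):

// Sketch only: per-minute sum of "bytes" via a date_histogram aggregation
const body = {
  size: 0,
  aggs: {
    per_minute: {
      date_histogram: { field: "#timestamp", interval: "1m" },
      aggs: {
        total_bytes: { sum: { field: "bytes" } }
      }
    }
  }
};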
We're displaying time series data (utilisation of a compute resource, sampled hourly over months) on a stacked area chart using D3.js:
d3.json("/growth/instance_count_1month.json", function(data) {
  data.forEach(function(d) {
    d.datapoints = d.datapoints.map(
      function(da) {
        // da[1] is a Unix timestamp in seconds; JavaScript Date expects
        // milliseconds, hence the multiplication by 1000
        return {date: new Date(da[1] * 1000),
                count: da[0]};
      });
  });

  x.domain(d3.extent(data[0].datapoints, function(d) { return d.date; }));
  y.domain([0,
    // round the maximum count up to the nearest 100 for a tidy y axis
    Math.ceil(d3.max(data.map(function (d) { return d3.max(d.datapoints, function (d) { return d.count; }); })) / 100) * 100
  ]);
The result is rather spiky for my tastes:
Is there an easy way to simplify the data, either using D3 or another readily available library? I want to reduce the spikiness, but also reduce the volume of data to be graphed, as it will get out of hand.
I have a preference for doing this at the UI level, rather than touching the logging routines (even though redundant JSON data will have to be transferred).
You have a number of options; you need to decide what is the best way forward for the type of data you have and how it will be used. Without knowing more about your data, the best I can suggest is re-sampling: simply report the data at longer intervals ('rolling up' the data), as sketched below. Alternatively, you could use a rolling average or look at various line-smoothing algorithms.
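A minimal sketch of the re-sampling idea, done at the UI level in plain JavaScript with d3.mean; the window size of 24 is just an assumption to collapse hourly samples into daily points:

// Down-sample by averaging every `windowSize` consecutive points: this both
// smooths the spikes and reduces the number of points D3 has to draw.
function downsample(datapoints, windowSize) {
  var result = [];
  for (var i = 0; i < datapoints.length; i += windowSize) {
    var chunk = datapoints.slice(i, i + windowSize);
    result.push({
      // keep the first date of the window as the representative x value
      date: chunk[0].date,
      count: d3.mean(chunk, function (d) { return d.count; })
    });
  }
  return result;
}

// e.g. inside the forEach above, after mapping the raw datapoints:
// d.datapoints = downsample(d.datapoints, 24);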