This is probably more complicated than it sounds, at least with Grafana.
I have an experiment where, for every location (1-100), a value changes over time. I want to show this with a line graph (or a bar graph), where the x-axis corresponds to the locations (1-100) and the y-axis corresponds to the average value at that location over the time interval set in Grafana in the upper right corner. The data comes from a database. Please suggest which type of graph (panel) I should choose in Grafana to achieve this. I can only see two kinds: those with time on the x-axis and those of type histogram, but neither seems applicable.
It seems Grafana's built-in panels only support
time series, which means the x-axis must be of 'time' type.
bilibala-echarts-panel
I used the third-party panel bilibala-echarts-panel
https://grafana.com/grafana/plugins/bilibala-echarts-panel/
to achieve the goal: the x-axis does not have to use a time value.
It lets you use a custom callback function to handle the data and render the chart.
setting
In the Grafana query settings:
format as time series
// a time series needs 3 columns:
time // time or number (anything that can be parsed as time)
metric // string: the series name
value // number (otherwise you get a value error)
Assign the location value to the time column (a hedged SQL sketch follows below).
// it may render as 1970-01 in the table view, but in ECharts we can read it back as a plain number
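For a SQL data source, such a query might look like this (a minimal sketch; the table experiment and the columns loc, val, and ts are hypothetical names standing in for your schema):

-- Return the location in the "time" column so Grafana accepts the result
-- as a time series; ECharts can read it back as a plain number.
SELECT
  loc         AS time,   -- number, parsed by Grafana as a timestamp
  'avg value' AS metric, -- series name
  avg(val)    AS value   -- averaged measurement per location
FROM experiment
WHERE $__timeFilter(ts)
GROUP BY loc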
In the ECharts panel option:
data.series holds the Grafana data.
// convert/adapt it to ECharts data here.
Formatting as table may give you more flexible data,
which you can then parse in the callback JS.
summary
Grafana and ECharts have different data models;
you have to understand both of them
and do the conversion in JS.
// bilibala-echarts-panel uses ECharts v4 as of 2022-07
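A minimal sketch of such a callback, assuming the panel hands the Grafana result to your function as data (with the classic target/datapoints series shape) and expects an ECharts v4 option object in return; the exact signature depends on the panel version, and all names here are illustrative:

function buildOption(data) {
  // Each Grafana series looks like { target: <name>, datapoints: [[value, time], ...] }.
  // Per the query setup above, the "time" slot actually carries the location (1-100).
  const series = data.series.map(s => ({
    name: s.target,
    type: 'line',
    // Grafana datapoints are [value, x]; ECharts wants [x, value]
    data: s.datapoints.map(([value, x]) => [x, value]),
  }));

  return {
    xAxis: { type: 'value', name: 'location' },
    yAxis: { type: 'value', name: 'average value' },
    legend: { data: series.map(s => s.name) },
    tooltip: { trigger: 'axis' },
    series: series,
  };
}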
We implemented the graph using the natel-plotly-panel plugin.
"panels": [
{
"pconfig": {
"traces": [
{
"mapping": {
"color": "64",
"size": null,
"text": "metriccat",
"x": "loccat",
"y": "valuecat",
"z": null
},
"name": "My value",
"show": {
"line": true,
"lines": true,
"markers": false
}
},
...
]
},
"pluginVersion": "7.5.5",
"targets": [
{
"format": "table",
"group": [],
"metricColumn": "none",
"rawQuery": true,
"rawSql": "SELECT\n avg(column_with_values) AS valuecat,\n loc loccat,\n avg(column_with_values) as metriccat\nFROM ... \nWHERE\n $__timeFilter(timestamp)\nGROUP BY loc\n\n",
"select": [
[
{
"params": [
"value"
],
"type": "column"
}
]
],
"timeColumn": "time",
"where": [
{
"name": "$__timeFilter",
"params": [],
"type": "macro"
}
]
},
...
],
"title": "Average Values Along the Locations",
"type": "natel-plotly-panel",
"version": 1
}
]
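For readability, the rawSql above (the table name is elided in the original and stays elided here) expands to the query below; loccat feeds the x mapping, valuecat the y mapping, and metriccat the trace text:

SELECT
  avg(column_with_values) AS valuecat,
  loc loccat,
  avg(column_with_values) AS metriccat
FROM ...
WHERE
  $__timeFilter(timestamp)
GROUP BY loc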
Related
When retrieving reverse isolines based on time with a list of ranges, does anyone know the expected behavior?
For example, if the range is 50,100,150,200,250,300,350,400, the polygon for 50 is much different than if the range is 50,100,150.
Based on the two parameter sets below, the polygon is extremely different for the 30-second reverse isoline range. The two calls occurred at the same time.
For https://isoline.route.ls.hereapi.com/routing/7.2/calculateisoline.json?apiKey=xxxx&mode=balanced;car;traffic:default;motorway:-3&rangeType=time&destination=geo!43.805388,-79.525348&range=30,1800:
The polygon is:
"isoline": [{
"range": 30,
"component": [{
"id": 0,
"shape": ["43.8079834,-79.5238495",
"43.8066101,-79.5204163",
"43.8038635,-79.5204163",
"43.8024902,-79.5245361",
"43.8038635,-79.528656",
"43.8066101,-79.5293427",
"43.8079834,-79.5272827",
"43.8079834,-79.5238495"]
}]
}]
For https://isoline.route.ls.hereapi.com/routing/7.2/calculateisoline.json?apiKey=xxxx&mode=balanced;car;traffic:default;motorway:-3&rangeType=time&destination=geo!43.805388,-79.525348&range=30:
The polygon is:
"isoline": [{
"range": 30,
"component": [{
"id": 0,
"shape": ["43.8059235,-79.5258236",
"43.8059235,-79.5245361",
"43.8057518,-79.5240211",
"43.8054085,-79.5240211",
"43.8050652,-79.5250511",
"43.8047218,-79.5253944",
"43.8047218,-79.5257378",
"43.8054085,-79.5264244",
"43.8057518,-79.5265102",
"43.8059235,-79.5262527",
"43.8059235,-79.5258236"]
}]
}]
The behaviour is the same as for a single range; multiple ranges just allow calculating many isolines with the same start or destination in one request.
Check this link for your reference.
https://developer.here.com/documentation/isoline-routing-api/dev_guide/topics/use-cases/multi-range-isoline.html
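For example, a single call with several range values (built from the same endpoint and parameters as the calls in the question; apiKey stays a placeholder) returns one isoline per range:

curl "https://isoline.route.ls.hereapi.com/routing/7.2/calculateisoline.json?apiKey=xxxx&mode=balanced;car;traffic:default;motorway:-3&rangeType=time&destination=geo!43.805388,-79.525348&range=50,100,150"

Each entry in the returned isoline array then carries its own range value and polygon, just like the single-range responses above.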
I am using curl with REST to access Smartsheet from my C# application running on Windows CE. My application is supposed to dump some data to a Smartsheet periodically.
Before I write to a sheet, I would like to know the total row count in the sheet, so that I don't exceed 5000 rows per sheet.
Is there an API that returns just the row count, given the sheet ID?
Currently I am using the API below, which returns the entire sheet data and takes very long to fetch and format.
curl https://api.smartsheet.com/2.0/sheets/{sheetId}
With up to 5000 rows per sheet, it takes very long to fetch and format the response below in order to determine the available rows:
{
"id": 4583173393803140,
"name": "sheet 1",
"version": 6,
"totalRowCount": 240,
"accessLevel": "OWNER",
"effectiveAttachmentOptions": [
"EVERNOTE",
"GOOGLE_DRIVE",
"EGNYTE",
"FILE",
"ONEDRIVE",
"DROPBOX",
"BOX_COM"
],
"readOnly": true,
"ganttEnabled": true,
"dependenciesEnabled": true,
"resourceManagementEnabled": true,
"cellImageUploadEnabled": true,
"userSettings": {
"criticalPathEnabled": false,
"displaySummaryTasks": true
},
"userPermissions": {
"summaryPermissions": "ADMIN"
},
"workspace": {
"id": 825898975642500,
"name": "New Workspace"
},
"projectSettings": {
"workingDays": [
"MONDAY",
"TUESDAY",
"WEDNESDAY"
],
"nonWorkingDays": [],
"lengthOfDay": 8
},
"hasSummaryFields": false,
"permalink": "https://app.smartsheet.com/b/home?lx=pWNSDH9itjBXxBzFmyf-5w",
"createdAt": "2018-09-24T20:27:57Z",
"modifiedAt": "2018-09-26T20:45:08Z",
"columns": [
{
"id": 4583173393803140,
"version": 0,
"index": 0,
"primary": true,
"title": "Primary Column",
"type": "TEXT_NUMBER",
"validation": false
},
{
"id": 2331373580117892,
"version": 0,
"index": 1,
"options": [
"new",
"in progress",
"completed"
],
"title": "status",
"type": "PICKLIST",
"validation": true
}
],
"rows": Array[4962]....
}
Any help will be greatly appreciated.
There isn't a request that specifically returns the number of rows on a sheet. But with any GET /sheets/{sheetId} operation, the resulting Sheet object will have a top-level totalRowCount attribute. So you don't have to GET the sheet and count the objects in the rows array; instead you can read the totalRowCount attribute to know how many rows are currently on the sheet.
If you are concerned about pulling down all of the sheet data, you can use paging to avoid getting all of it back. A GET /sheets/{sheetId}?pageSize=1 will give you the Sheet object with only the first row of data, making the payload much smaller; the totalRowCount attribute will still be present in the response, as in the sketch below.
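A minimal sketch of that call with curl (the bearer token header is Smartsheet's standard authorization scheme; the token value is a placeholder):

curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  "https://api.smartsheet.com/2.0/sheets/{sheetId}?pageSize=1"

Reading totalRowCount from this small response is then enough to decide whether another write would push the sheet past the 5000-row limit.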
I have discrepancies in the revenue metric between the data I collect from the Google Analytics API and the custom reports in the user interface.
The discrepancy is the same ratio for every value: the data collected through the API is consistently greater than the data in the custom reports.
This is the body of the request I'm using:
{
"reportRequests":[
{
"viewId":"xxxxxxxxxx",
"dateRanges": [{"startDate":"2017-07-01","endDate":"2018-12-31"}],
"metrics": [
{"expression": "ga:transactionRevenue","alias": "transactionRevenue","formattingType": "CURRENCY"},
{"expression": "ga:itemRevenue","alias": "itemRevenue","formattingType": "CURRENCY"},
{"expression": "ga:productRevenuePerPurchase","alias": "productRevenuePerPurchase","formattingType": "CURRENCY"}
],
"dimensions": [
{"name": "ga:channelGrouping"},
{"name": "ga:sourceMedium"},
{"name": "ga:dateHour"},
{"name": "ga:transactionId"},
{"name": "ga:keyWord"}
],
"pageSize": "10000"
}]}
This is an extract of the response:
{
"reports": [
{
"columnHeader": {
"dimensions": [
"ga:channelGrouping",
"ga:sourceMedium",
"ga:dateHour",
"ga:transactionId",
"ga:keyWord"
],
"metricHeader": {
"metricHeaderEntries": [
{
"name": "transactionRevenue",
"type": "CURRENCY"
},
{
"name": "itemRevenue",
"type": "CURRENCY"
},
{
"name": "productRevenuePerPurchase",
"type": "CURRENCY"
}
]
}
},
"data": {
"rows": [
{
"dimensions": [
"(Other)",
"bing / (not set)",
"2018052216",
"834042319461-01",
"(not set)"
],
"metrics": [
{
"values": [
"367.675436",
"316.55053699999996",
"316.55053699999996"
]
}
]
},
...
So, if I create a custom report in the Google Analytics user interface and look for the transaction ID 834042319461-01, I get the following result:
[Screenshot: Google Analytics custom report filtered by transaction ID 834042319461-01]
In the end I have a revenue value of 367.675436 in the API response but a value of 333.12 in the custom report; the API value is 10.37% higher. I get this same 10.37% increase for all values.
Why am I seeing this discrepancy?
What would you recommend doing in order to solve this problem?
Thanks.
My bet is that you're experiencing sampling (is your time range in the UI shorter than in the API?): https://support.google.com/analytics/answer/2637192?hl=en
Sampling applies when:
you customize the reports
the number of sessions for the overall time range of the report (whether or not your query returns fewer sessions) exceeds 500K (GA) or 100M (GA 360)
The consequence is that:
the report will be based on a subset of the data (the % depends on the total number of sessions)
therefore your report data won't be as accurate as usual
What you can do to reduce sampling:
increase the sample size (this will only decrease sampling to a certain extent and in most cases won't completely remove it). In the UI it's done via the option at the top of the report; in the API it's done using the samplingLevel option, as shown in the snippet after this list
reduce the time range
create filtered views so your reports contain the data you need without needing to customize reports
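Applied to the v4 request body from the question, raising the sample size looks like this (LARGE requests the most precise, and slowest, response; the rest of the request stays unchanged):

{
  "reportRequests": [
    {
      "viewId": "xxxxxxxxxx",
      "dateRanges": [{"startDate": "2017-07-01", "endDate": "2018-12-31"}],
      "samplingLevel": "LARGE",
      ...
    }]}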
Because you are looking at a particular transaction ID, this might not be a sampling issue.
The ratio is consistent (10.37% from your question), so I believe this is an effect of the currency you are using.
Try using the local-currency metrics when making monetary API calls.
For example -
ga:localTransactionRevenue instead of ga:transactionRevenue
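In the metrics array of the request body from the question, that swap would look like this (only the expression changes; the alias and formatting stay as they were):

{"expression": "ga:localTransactionRevenue", "alias": "transactionRevenue", "formattingType": "CURRENCY"}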
I'm trying to achieve something like this example, using Kibana and/or Vega/Vega-Lite.
The CSV file I used to create the index in Kibana was:
student1,90,80,85,95
student2,50,60,55,100
student3,40,70,50,60
At the moment I have this:
{
"$schema": "https://vega.github.io/schema/vega-lite/v2.json",
"data": {
"url": {
"%context%": true,
"index":"grades",
"body":{
"size":5
"_source":["StudentName","test1","test2","test3","test4"]
}
},
"format":{"property":"hits.hits"}
},
"mark": "line",
"encoding": {
"x": {"field": "_source.test1", "type": "quantitative"},
"y": {"field": "_source.StudentName", "type": "nominal"}
}
}
So my problem is achieving what is in the picture. I know the "encoding" section of my Vega code isn't correct, but I'm having trouble finding a way to get multiple parameters onto the x-axis.
I think this Vega example
would do the trick if I managed to replace the hardcoded values in data with the data from the Kibana index. Is there any way to use the "_source" fields inside "values", or is there any option in encoding that I can use in order to achieve my result?
Thanks in advance.
Note: my end result will most likely have only one student, but I want the visualization to update in real time, hence the need to use the fields.
You also posted this question on the Vega issue tracker, and answers have been posted there: https://github.com/vega/vega/issues/1229#issuecomment-379593878
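For reference, one way to get several test fields onto a shared axis is the fold transform. It only exists from Vega-Lite v3 on, while the spec in the question uses the v2 schema, so this is a hedged sketch that assumes the schema can be upgraded; the index and field names are taken from the question:

{
  "$schema": "https://vega.github.io/schema/vega-lite/v3.json",
  "data": {
    "url": {
      "%context%": true,
      "index": "grades",
      "body": {
        "size": 5,
        "_source": ["StudentName", "test1", "test2", "test3", "test4"]
      }
    },
    "format": {"property": "hits.hits"}
  },
  "transform": [
    {
      "fold": ["_source.test1", "_source.test2", "_source.test3", "_source.test4"],
      "as": ["test", "score"]
    }
  ],
  "mark": "line",
  "encoding": {
    "x": {"field": "test", "type": "nominal"},
    "y": {"field": "score", "type": "quantitative"},
    "color": {"field": "_source.StudentName", "type": "nominal"}
  }
}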
I need to get a classified eVar from the Omniture real-time API, exclude some values, and then break it down by sitesection.
I tried this query:
{
"reportDescription": {
"source": "realtime",
"reportSuiteID": "**RSID**", //MY REPORT SUITE
"metrics": [{
"id": "instances"
}],
"elements": [{
"id": "evar", //MY EVAR
"top": 100,
"classification": "Real Time", //CLASSIFICATION NAME
"search": {
"type": "NOT",
"keywords": ["somevalue"] //THE VALUE TO EXCLUDE
}
},{
"id" : "sitesection",
"top" : 1
}],
"dateGranularity": "minute:1",
"dateFrom": "-1 minute"
}
}
But in the JSON response I still see "somevalue", as if it were not excluded.
The strange thing is that if I remove the "breakdown" (with sitesection), the classification filter seems to work fine.
Can I not use a classification filter if a breakdown is used in a real-time report? I can't find any documentation about that.
Another thing: if I request a report with the classification but without any search, I receive the response, but it contains a lot of "::Unspecified::" entries. The problem is that "::Unspecified::" seems to be the most recent data Omniture received from my web pages. I think this means classifications are not applied in real time, even though you can use them in a real-time report.