I am using Grafana to pull in Graphite events and overlay them on graphs as annotations. This seems to work very inconsistently for me, so I was hoping that someone might have an idea as to what I may be doing wrong.
I am able to see all of the events in the Graphite dashboard, so I know they are available.
When I create the annotation I am using Graphite event tags:
The one above seems to work as expected:
I added a second annotation, and this one does not seem to show up at all. When I look at the network console in Chrome, both annotations are being fetched as expected, but for some reason the second one is not added to the screen:
First network event (appears on graph):
[{"data": "Fixed issue with metrics not being collected properly for bamboo.", "what": "metrics bug fixed", "when": 1444197389.0, "id": 11, "tags": "bamboo_events"}]
Second network event (does not appear on graph):
[{"data": "Sync graphiteprod-c02 data to graphiteprod-c01", "what": "sync", "when": 1446665626.0, "id": 13, "tags": "testtag"}]
I have tried creating a new dashboard that only has the second annotation defined, and it does not show up there either.
It looks like there might be a discrepancy between the Graphite event epoch time and Grafana.
Graphite is returning 2015-11-04 08:33:46 as 1446662026.0.
Compared to the current epoch time (1446651804), the Graphite event is in the future. The time appears to be about 5 hours ahead, so there might be some sort of time zone conversion issue.
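A quick way to sanity-check this is to convert the event's when value back to UTC and compare it against the current time; a minimal illustration in plain JavaScript:
// Convert the Graphite event's "when" (seconds) back to a UTC timestamp
var when = 1446662026.0;
console.log(new Date(when * 1000).toISOString()); // event time in UTC
console.log(new Date().toISOString());            // current time in UTC
// Positive value = the event sits in the future by this many hours
console.log((when - Date.now() / 1000) / 3600);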
Related
I am working on a specific requirement to filter out any meetings that are going to start in the next 15 minutes on a given calendar.
I can see that there is a timeMax query option which will give events starting before a given time, but the problem I am facing is that I am also getting older events (which have already finished). Is there any way to get only the records from now to the next 15 minutes?
I tried querying with a syncToken, but I guess that doesn't work with timeMax, so I am not able to get just the delta and instead get all the events.
Calendar Event List API
As suggested in the comments, you can use timeMin and timeMax. It should be something similar to:
timeMin = 2022-12-27T15:30:00+01:00
timeMax = 2022-12-27T15:45:00+01:00
Notes:
Use the query parameters above and make sure the values are RFC3339 timestamps.
This might be the only available option when using events.list to filter the events by the 15-minute mark and check their status. It would be a loop process that could potentially hit a quota (see the filtering sketch after the request example below).
// 15-minute window: from now until now + 15 minutes
var startDate = new Date();
var maxDate = new Date(startDate.getTime() + 15 * 60 * 1000);

var request = Calendar.events.list({
  'calendarId': calendar_id,
  'singleEvents': true,                // expand recurring events into single instances
  'orderBy': 'startTime',
  'timeMin': startDate.toISOString(),  // only events ending after now
  'timeMax': maxDate.toISOString()     // only events starting before now + 15 minutes
});
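For that loop, a minimal sketch of filtering the returned events (assuming the Apps Script advanced Calendar service, where events.list returns the response object directly) could look like this:
// Keep only the events that actually start inside the 15-minute window
var events = request.items || [];
for (var i = 0; i < events.length; i++) {
  var ev = events[i];
  // Timed events carry start.dateTime; all-day events only carry start.date
  var start = new Date(ev.start.dateTime || ev.start.date);
  if (start >= startDate && start <= maxDate) {
    Logger.log(ev.summary + ' starts at ' + start + ' (status: ' + ev.status + ')');
  }
}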
Calendar API limitation
If these suggestions or options are not enough, or are considered workarounds due to the limitations, you can always request a feature through the Issue Tracker.
References
Events: list
Issue Tracker
We would like to have a line chart (or distribution) in our custom Stackdriver dashboard with the response time returned by the Apache logs: a simple line chart of the response time (or latency, as some call it) coming from structured logs.
We set up our logging agent and added structured logs to fluentd; as you can see in the image, it is working on the log screen.
In the httpRequest we have the latency:
httpRequest: {
latency: "0.081215s"
referer: "-"
requestMethod: "GET"
requestUrl: "/v1/call/match-in?q=spdif&fields=&limit=200&ra..."
responseSize: "636"
serverIp: "3.89.69.139"
status: 200
userAgent: "HTTPClient/1.0 (2.8.3, ruby 2.2.3 (2015-08-18))"
}
We tried creating a custom log metric by picking the field and setting the expression to ([0-9.]+)s to eliminate the trailing 's'. Our values are sub-second (e.g. 79 ms appears as 0.079800s); we are not sure if we need to modify the bucket configuration.
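To illustrate what we expect the expression to capture (plain JavaScript, purely for illustration):
// What ([0-9.]+)s should pull out of the latency string
var latency = "0.081215s";
var match = latency.match(/([0-9.]+)s/);
console.log(match[1]);          // "0.081215"
console.log(Number(match[1]));  // 0.081215 - a sub-second value, i.e. between 0 and 1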
But the chart is coming back empty:
How do we plot it as a line chart on a custom dashboard in Stackdriver?
Is there a way to plot an extracted time field from logs on the screen?
Update 1:
How can we actually check if we created a metric that has too many time series? In the Troubleshooting section there are three cases, but how do we verify them?
Update 2:
So we found that this doesn't work with values below 1: we had a chart with sub-second numbers, and it looks like while the value is between 0 and 1 it doesn't plot.
The image below clearly shows that while the latency was below 1 it did not plot at the start of the distribution, and once it was more than 1 it plotted on the right chart.
So based on these findings, what is the real workaround? What did we miss?
Here is our config:
I'm getting a 501 response from the Clockify API when trying to create a Time Entry using CreateTimeEntryRequest.
I've verified that I can query the API and get data from it, so I'm using the correct X-Api-Key. I've resolved a few issues with bad datetime formats, but I'm still getting the error.
URL I'm posting to:
https://api.clockify.me/api/workspaces/REMOVED/timeEntries/
My POST request header looks like this:
{"x-api-key": REMOVED, "Content-Type": "application/json"}
The body of the request is (For example):
{"start": "2019-01-28T14:53:04Z", "billable": false, "description": "Test Time Entry", "projectID": null, "taskID": null, "end": "2019-01-28T15:53:04Z", "tagIds": []}
I'm getting:
{"message": "Entity not created.", "code": 501}
And the time entry is not being created.
I expect some kind of success message
It has something to do with the "end" variable. If you remove it, it'll work. This of course means the timer will be running (and you will get a 400 error if you already have a timer running), so if you want to stop it, you'll have to immediately call PUT /workspaces/{workspaceId}/timeEntries/endStarted; if you want the stop time to be at some point in the past, you'll have to update the timer with PUT /workspaces/{workspaceId}/timeEntries/{id}. However, the update doesn't seem to work either (same issue). My guess is they made a change to the endpoint (perhaps renamed the "end" variable), because I'm about 75% sure I used this API within the last month or so and it worked.
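As a rough sketch of that sequence (the body shape for the endStarted call is my assumption, not something I've verified):
// Create the entry without "end" - including "end" is what triggers the 501
const base = 'https://api.clockify.me/api/workspaces/WORKSPACE_ID';
const headers = { 'x-api-key': 'API_KEY', 'Content-Type': 'application/json' };

fetch(base + '/timeEntries/', {
  method: 'POST',
  headers: headers,
  body: JSON.stringify({
    start: '2019-01-28T14:53:04Z',
    billable: false,
    description: 'Test Time Entry',
    projectID: null,
    taskID: null,
    tagIds: []
  })
}).then(function () {
  // Then immediately stop the running timer
  return fetch(base + '/timeEntries/endStarted', {
    method: 'PUT',
    headers: headers,
    body: JSON.stringify({ end: '2019-01-28T15:53:04Z' }) // assumed body shape
  });
});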
Hopefully someone from Clockify will see this and give an update. I had a similar issue happen with the "me" field in the GetSummaryReportRequest object. It stopped working and removing the field fixed it.
I've been attempting to log activity on a mobile-like device using the Google Analytics Measurement Protocol. All of these attempts validate against the validation URL, and I can see activity when I look at the real-time reports on the Analytics website. But when I look at the Home or Overview reports for the day, no activity is shown.
The view is set for "All Mobile App Data".
The POST body looks something like this:
v=1&tid=UA-000000000-1&ds=app&qt=1601&uid=uid-zzzzz&t=screenview&cd=Foo&an=Foo%20App%20Name&aid=com.example.foo&aiid=com.example.foo&av=0.0.1&ua=Mozilla%2F5.0%20(Linux%3B%20Android%207.0%3B%20SM-G930V%20Build%2FNRD90M)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F59.0.3071.125%20Mobile%20Safari%2F537.36
The ua field is just a pre-defined string. I found that if I omitted it, the Real Time monitoring listed the hits as desktop hits, although I was in a Mobile report and the ds field was "app".
Am I missing a field that is required? Is there some reason why it is showing up in the real-time report, but not in a daily report? Is there some other way to diagnose why the data is vanishing, or confirm the data is actually being captured?
When I check the debug endpoint, the hit is valid.
Request:
https://www.google-analytics.com/debug/collect?v=1&tid=UA-XXX-1&ds=app&qt=1601&uid=uid-zzzzz&t=screenview&cd=Foo&an=Foo%20App%20Name&aid=com.example.foo&aiid=com.example.foo&av=0.0.1&ua=Mozilla%2F5.0%20(Linux%3B%20Android%207.0%3B%20SM-G930V%20Build%2FNRD90M)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F59.0.3071.125%20Mobile%20Safari%2F537.36
Response
{
"hitParsingResult": [ {
"valid": true,
"parserMessage": [ ],
"hit": "/debug/collect?v=1\u0026tid=UA-53766825-1\u0026ds=app\u0026qt=1601\u0026uid=uid-zzzzz\u0026t=screenview\u0026cd=Foo\u0026an=Foo%20App%20Name\u0026aid=com.example.foo\u0026aiid=com.example.foo\u0026av=0.0.1\u0026ua=Mozilla%2F5.0%20(Linux%3B%20Android%207.0%3B%20SM-G930V%20Build%2FNRD90M)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F59.0.3071.125%20Mobile%20Safari%2F537.36"
} ],
"parserMessage": [ {
"messageType": "INFO",
"description": "Found 1 hit in the request."
} ]
}
I cannot use one of the mobile libraries from Firebase - this is not one of the platforms they support. I do not wish to pretend this is a web page - there is no associated hostname or path. I do not wish to use Events since I can't do event Behavior Flow, which is one of the things I'm interested in seeing.
I'm aware that it can sometimes take "a day or so" for results to first appear. The site was set up over five days ago at this point and has received data during that time.
Good thought about the anti-spam setting; however, the setting appears to be correct:
I've also tried using GET instead of POST - no change, it still shows the hit in real-time, but then it vanishes.
However, I know that it can record hits permanently. There were two hits from a spammer in Russia that have shown up in the daily report (I wasn't there to see them show up in real-time). I don't know what they did, but I would love to find out, since it might help figure out how I can add a record.
In the real-time reports, it correctly identifies the data center all the hits are coming from. Perhaps the hits are being filtered out somewhere beyond my control?
Try adding cid. I know it says this is an optional parameter, but for mobile accounts I believe it may be required.
Client ID
Optional.
This field is required if User ID (uid) is not specified in the request. This anonymously identifies a particular user, device, or browser instance. For the web, this is generally stored as a first-party cookie with a two-year expiration. For mobile apps, this is randomly generated for each particular instance of an application install. The value of this field should be a random UUID (version 4) as described in http://www.ietf.org/rfc/rfc4122.txt.
Example value: 35009a79-1a05-49d7-b876-2b884d0f825b
Although this says it needs to be a UUID v4, it does work with other UUIDs (I've tested it with a v5, which is a hash of the value used for the uid parameter).
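A minimal sketch of adding cid to the hit (crypto.randomUUID() is just one convenient way to get a v4 UUID; in a real app you would generate it once per install and reuse it):
// Build the Measurement Protocol payload with a cid added
const cid = crypto.randomUUID(); // random version-4 UUID, persist and reuse per device

const params = new URLSearchParams({
  v: '1',
  tid: 'UA-000000000-1',
  cid: cid,            // the added Client ID
  ds: 'app',
  uid: 'uid-zzzzz',
  t: 'screenview',
  cd: 'Foo',
  an: 'Foo App Name',
  aid: 'com.example.foo',
  av: '0.0.1'
});

fetch('https://www.google-analytics.com/collect', {
  method: 'POST',
  body: params.toString()
});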
I'm working on a Meteor mobile app that displays information about local places of interest, and one of the things that I want to show is the weather in each location. I currently have my locations stored with latlng coordinates, and they're searchable by radius. I'd like to use the OpenWeatherMap API to pull in some useful 'current conditions' information so that when a user looks at an entry they can see basic weather data. Ideally I'd like to limit the number of outgoing requests to keep the pages snappy (and keep API requests down).
I'm wondering if I can create a server collection of weather data that I update regularly server-side (hourly?) and that my clients then query (perhaps using a Mongo $near lookup?). That way all of my data is handled within Meteor, rather than each client going out to grab the latest data from the API. I don't want to have to iterate through all of the locations in my list and make a separate call out to the API for each, as I have approx. 400 locations(!). I'm afraid I'm new to API requests (and Meteor itself), so apologies if this is a poorly phrased question.
I'm not entirely sure if this is doable, or if it's even the best approach - any advice (and links to any useful code snippets!) would be greatly appreciated!
EDIT / UPDATE!
OK, I haven't managed to get this working yet, but I have some more useful details on the data!
If I make a request to the OpenWeatherMap API I can get data back for all of their locations (which I would like to add to / update in a collection). I could then do a regular lookup against that, instead of making a client request straight out to them every time a user looks at a location. The JSON data looks like this:
{
"message":"accurate",
"cod":"200",
"count":50,
"list":[
{
"id":2643076,
"name":"Marazion",
"coord":{
"lon":-5.47505,
"lat":50.125561
},
"main":{
"temp":292.15,
"pressure":1016,
"humidity":68,
"temp_min":292.15,
"temp_max":292.15
},
"dt":1403707800,
"wind":{
"speed":8.7,
"deg":110,
"gust":13.9
},
"sys":{
"country":""
},
"clouds":{
"all":75
},
"weather":[
{
"id":721,
"main":"Haze",
"description":"haze",
"icon":"50d"
}
]
}, ...
Ideally I'd like to build my own local 'weather' collection that I can search using Mongo's $near (to keep outbound requests down and speed things up), but I don't know if this will be possible because of the format the data comes back in - I think I'd need to structure my location data like this in order to use a geo search:
"location": {
"type": "Point",
"coordinates": [-5.47505,50.125561]
}
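Something like this is roughly the transform I think each list entry would need (just a sketch):
// Map an OpenWeatherMap list entry onto a document with a GeoJSON Point
function toWeatherDoc(entry) {
  return {
    owmId: entry.id,
    name: entry.name,
    location: {
      type: 'Point',
      coordinates: [entry.coord.lon, entry.coord.lat] // GeoJSON order is [lng, lat]
    },
    main: entry.main,
    weather: entry.weather,
    updatedAt: new Date(entry.dt * 1000)
  };
}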
My questions are:
How can I build that collection (I've seen this - could I do something similar and update existing entries in the collection on a regular basis?)
Does it just need to live on the server, or client too?
Do I need to manipulate the data in order to get a geo search to work?
Is this even the right way to approach it??
EDIT/UPDATE2
Is this question too long/much? It feels like it. Maybe I should split it out.
Yes, this is easily possible. Because your question is so large, I'll give you a high-level explanation of what I think you need to do.
You need to create a collection where you're going to save the weather data.
Add a request worker that requests new data and updates the collection on a set interval. Use something like cron-tick for scheduling the interval.
Requesting data should only happen server side, and I can recommend the request npm package for that.
Meteor.publish the weather collection and have the client subscribe to that, optionally with a filter for its location.
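A rough sketch of how those pieces fit together (collection and publication names are placeholders, Meteor's built-in HTTP package and Meteor.setInterval stand in for request/cron-tick, and the OpenWeatherMap URL and params are only illustrative):
// Shared between client and server: the collection
Weather = new Mongo.Collection('weather');

if (Meteor.isServer) {
  // Geo index so $near queries work on the location field
  Weather._ensureIndex({ location: '2dsphere' });

  // Request worker: refresh the weather data every hour
  Meteor.setInterval(function () {
    var result = HTTP.get('http://api.openweathermap.org/data/2.5/box/city', {
      params: { bbox: '-6,49,2,56,10', APPID: 'YOUR_API_KEY' } // illustrative request
    });
    result.data.list.forEach(function (entry) {
      Weather.upsert({ owmId: entry.id }, {
        $set: {
          name: entry.name,
          location: { type: 'Point', coordinates: [entry.coord.lon, entry.coord.lat] },
          main: entry.main,
          weather: entry.weather,
          updatedAt: new Date(entry.dt * 1000)
        }
      });
    });
  }, 60 * 60 * 1000);

  // Publish the weather near a given point
  Meteor.publish('weatherNear', function (lng, lat) {
    return Weather.find({
      location: {
        $near: {
          $geometry: { type: 'Point', coordinates: [lng, lat] },
          $maxDistance: 20000 // metres
        }
      }
    });
  });
}

if (Meteor.isClient) {
  // Subscribe with the location the user is currently looking at
  Meteor.subscribe('weatherNear', -5.47505, 50.125561);
}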
You should now be getting the weather data on your client and should be able to get freaky with it.