In Azure you can see how many requests have been made against a Cosmos DB account in its Overview tab. I want to get that same number (total requests) using Log Analytics diagnostic logs, but I am having trouble knowing which logs to count, since there are more logs than total requests.
Around March I used the logic that if a log had a full self-link (with database id/name and collection id/name) in the requestResourceId_s field, then I would count it. This seemed to work and the numbers added up, but when I revisited it a while back I noticed it no longer works. I then tried filtering the logs with collectionName_s != "", requestLength_s != "0", and requestCharge_s != "0.000000", using the distinct operator on the activityId, and combining these filters in different ways. But it always returns the wrong numbers, and I can't seem to reproduce the Total Request Count.
AzureDiagnostics
// pull the database segment (e.g. /dbs/master) out of the request's resource self-link
| extend requestDatabaseId = extract("(^(/dbs/.*?)/)", 1, requestResourceId_s)
// pull the collection segment (e.g. /colls/master)
| extend requestCollectionId = extract("((/colls/.*?)/)", 1, requestResourceId_s)
// keep only logs whose self-link names both a database and a collection
| where requestDatabaseId != "" and requestCollectionId != ""
This is the main part of the query I used to get the total request count. For instance, it will count a log that has /dbs/master/colls/master/docs in requestResourceId_s.
To give a concrete example: where the overview tab shows 97 total requests, my old logic now counts 326 logs.
Any help is appreciated.
Welcome to Stack Overflow.
AFAIK, you should use the Kusto query below to get the total number of requests made:
AzureMetrics | where MetricName == "TotalRequests"
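If you need a single number over a time range, a minimal sketch might be the following (the resource-provider filter, the 1-day window, and the use of the Total column are my assumptions, not part of the original answer):

AzureMetrics
// restrict to Cosmos DB metrics (assumed provider string)
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
| where MetricName == "TotalRequests"
| where TimeGenerated > ago(1d)
// sum the per-interval totals into one figure
| summarize TotalRequests = sum(Total)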
A prerequisite for this to work is to turn on logging via a diagnostic setting, as explained in this document. Make sure you tick the 'Requests' box under the Metric section, as highlighted in the screenshot below.
Please refer to this document for the full list of currently supported metrics. Should the supported metrics for any Azure resource change in the future, that page would likely be updated.
Hope this helps!! Cheers!!
I am currently experimenting with Azure TSI Gen2 (ApiVersion = "2020-07-31") and I am wondering about the effect of the Take parameter on the GetSeries and GetEvents endpoints.
If I query TSI data for a wide searchSpan that contains more events than I allow with my Take parameter, what happens? Is the data returned in some order? What is the expected form of the response data?
Documentation definition for the take parameter:
take - integer - Maximum number of property values in the whole response set, not the maximum number of property values per page. Defaults to 10,000 when not set. Maximum value of take can be 250,000.
The take parameter specifies the number of events returned by the query (across pages). So if your search span has more events than your "take", TSI randomly selects, or 'takes', that set of records from storage. E.g. if you have 20k events in your search span, and a take of 10k, you'd get a random 10k events from the 20k in that timeframe.
In the TSI explorer, when you "Explore Events" to see the raw data, the explorer calls GetEvents. The TSI explorer will always try to show the maximum (250k) events and will notify you if there are more than 250k in the search span.
Data isn't returned in any order by the APIs. Adding sorting capabilities is something we have on our roadmap. Here's a feedback item where you can upvote the request to add this functionality, as well.
Here are some examples showing the request/response of GetEvents and other APIs.
As documented here (https://learn.microsoft.com/en-us/azure/time-series-insights/concepts-query-overview#time-series-query-tsq-apis), the Get Events and Get Series APIs support pagination to retrieve the complete response dataset for the selected input.
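For illustration, a GetEvents request with an explicit take could look roughly like this (the environment FQDN, time series ID, and search span are placeholders):

POST https://{environmentFqdn}/timeseries/query?api-version=2020-07-31

{
  "getEvents": {
    "timeSeriesId": ["sensor-01"],
    "searchSpan": {
      "from": "2020-07-01T00:00:00Z",
      "to": "2020-07-02T00:00:00Z"
    },
    "take": 10000
  }
}

If more events remain in the search span, the response carries a continuation token that you send back (via the x-ms-continuation request header) to fetch the next page.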
Hi, I want to monitor a Postgres database using ODBC and show a notification based on a condition. I'm creating an item with db.odbc.get[,{$DSN_NAME}]; please find the screenshot of my item configuration below.
I am able to get data; please find the screenshot of the received data below.
Now I want to process this data and show a notification to the user that these jobs have failed if the status equals 8. I have tried it with a trigger, but I can't get it to work.
Please find the screenshots of the trigger configuration and of the error that occurred below.
The following error occurred:
Can anyone help me with this? Please also correct me if my approach is wrong, since I'm very new to this.
I'm also trying low-level discovery, but I don't know the exact way of doing it.
I have tried the below, where I'm facing the following issue:
Cannot create item: item with the same key "db.odbc.select[testing_odbc {#job_name},{$DSN_NAME}]" already exists.
Find the screenshot of the discovery rule below.
Then I'm creating an item prototype as below.
Please find the sample data from the discovery rule:
{
    "data": [
        {"job_name": "job1", "job_status": 1},
        {"job_name": "job2", "job_status": 0},
        {"job_name": "job3", "job_status": 2}
    ]
}
I'm scheduling the discovery rule every 20 seconds and the item prototype every 30 seconds, and I guess every 20 seconds it's trying to create an item with the same key as before.
How do I resolve this, and what SQL query do I need to give in the item prototype?
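For illustration, a minimal sketch of an item prototype that would avoid the duplicate-key error (the jobs table and its column names are hypothetical). Note that Zabbix only substitutes uppercase LLD macros such as {#JOB_NAME}; a lowercase {#job_name} stays literal, so every generated item gets the identical key and the "already exists" error follows:

Key: db.odbc.select[job_status_{#JOB_NAME},{$DSN_NAME}]
SQL: SELECT job_status FROM jobs WHERE job_name = '{#JOB_NAME}';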
That JSON text is not a number, so you can't compare it to a number.
Options:
Change your query to return a number.
Use JSONPath preprocessing to select the number from the JSON (e.g. $[0]["Status"]).
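For the sample discovery JSON shown earlier, assuming the item receives that same document, a JSONPath preprocessing step along these lines would extract a single numeric status to compare against 8:

$.data[0].job_status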
I'm trying to retrieve ga:userAgeBracket and ga:userGender separately using the Google Analytics Reporting API v4, with a filter on eventCategory and eventAction.
From the GA dashboard, I'm able to retrieve the data even though there are only ~2.4k users and ~5.6k sessions. The breakdown is 178 males and 142 females.
I'm trying to get the same result with the API, but it returns nothing. I'm testing with https://ga-dev-tools.appspot.com/query-explorer with the same filters etc.
Is there any limit on the API only when there is a small amount of data, or is there another reason?
EDIT: I tried another account with more data, and I still have the issue. Here are some screenshots.
You created a segment in the GA web UI and are comparing the results to "filters" in the explorer. You should apply the same segment instead, for consistency. Once you save the segment in the web UI, it should be available as a selection in the "segment" field.
Your filtering also uses the "=~" regex match operator instead of the "=@" contains operator. Try that and you should get results.
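As a sketch (the category and action values here are placeholders), a contains-based filter in the Query Explorer would look like:

filters: ga:eventCategory=@myCategory;ga:eventAction=@myAction

The semicolon combines the two conditions with AND.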
I am trying to replicate Firebase Cohorts using BigQuery. I tried the query from this post: Firebase exported to BigQuery: retention cohorts query, but the results I get don't make much sense.
I manage to get the users for period_lag 0, similar to what I can see in Firebase; however, the rest of the numbers don't look right:
Results:
One of the period_lag values is missing (I only see 0, 1, and 3; there is no 2), and the user counts for each lag period don't look right either. I would expect to see something like this:
Firebase Cohort:
I'm pretty sure that the issue is in how I replaced the parameters in the original query with those from Firebase. Here are the bits that I have updated in the original query:
#standardSQL
WITH activities AS (
  SELECT answers.user_dim.app_info.app_instance_id AS id,
         FORMAT_DATE('%Y-%m', DATE(TIMESTAMP_MICROS(answers.user_dim.first_open_timestamp_micros))) AS period
  FROM `dataset.app_events_*` AS answers
  JOIN `dataset.app_events_*` AS questions
    ON questions.user_dim.app_info.app_instance_id = answers.user_dim.app_info.app_instance_id
  -- WHERE CONCAT('|', questions.tags, '|') LIKE '%|google-bigquery|%'
(...)
WHERE cohorts_size.cohort >= FORMAT_DATE('%Y-%m', DATE('2017-11-01'))
ORDER BY cohort, period_lag, period_label
So I'm using user_dim.first_open_timestamp_micros instead of create_date and user_dim.app_info.app_instance_id instead of id and parent_id. Any idea what I'm doing wrong?
I think there is a misunderstanding about how, and which, data to retrieve into the activities table. Let me state the differences between the case presented in the other Stack Overflow question you linked and the case you are trying to reproduce:
In the other question, answers.creation_date refers to a date value that is not fixed and can differ for a single user: the same user can post two answers on two different dates, so you end up with two activities entries like {[ID:user1, date:2018-01], [ID:user1, date:2018-02], [ID:user2, date:2018-01]}.
In your question, answers.user_dim.first_open_timestamp_micros refers to a date value that is fixed in the past because, as stated in the documentation, that variable holds "the time (in microseconds) at which the user first opened the app". That value is unique, so for each user you will only ever have one activities entry, like {[ID:user1, date:2018-01], [ID:user2, date:2018-02], [ID:user3, date:2018-01]}.
I think that is why you are not getting information about the lagged retention of users: you are not recording each time a user accesses the application, only the first time they did.
Instead of using answers.user_dim.first_open_timestamp_micros, you should look for another value among the ones available in the documentation link I shared before, possibly event_dim.date or event_dim.timestamp_micros, although you will have to take into account that these fields refer to an event and not to a user, so you should do some pre-processing first. For testing purposes, you can use some of the publicly available BigQuery exports for Firebase.
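As a sketch of that idea (field names follow the old user_dim/event_dim Firebase export schema, and `dataset.app_events_*` mirrors your table pattern; this is illustrative, not the exact query you need):

#standardSQL
WITH activities AS (
  -- one row per user per month, based on when events actually occurred
  SELECT user_dim.app_info.app_instance_id AS id,
         FORMAT_DATE('%Y-%m', PARSE_DATE('%Y%m%d', event.date)) AS period
  FROM `dataset.app_events_*`,
       UNNEST(event_dim) AS event
  GROUP BY id, period
)
SELECT * FROM activities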
Finally, as a side note, it is pointless to JOIN a table with itself, so your edited Standard SQL query would be better written as:
#standardSQL
WITH activities AS (
  SELECT answers.user_dim.app_info.app_instance_id AS id,
         FORMAT_DATE('%Y-%m', DATE(TIMESTAMP_MICROS(answers.user_dim.first_open_timestamp_micros))) AS period
  FROM `dataset.app_events_*` AS answers
  GROUP BY id, period
In my DevStack setup there is an issue displaying details in the Rating section.
Pricing was configured correctly; during instance creation the rate is displayed in the instance creation window.
But after creating an instance, when I check the Rating section for rates or cost, it does not display the values as expected.
I checked the DB table (rated_data_frames) in CloudKitty.
It doesn't have the necessary values immediately.
I kept checking continuously for some hours, and I can see that the CloudKitty DB only gets updated with the values some hours after instance creation.
That is, only after some hours is the row for the created instance added to the table, and only then is it displayed in the front end as well.
I want to know why this is happening.
Is there any solution to get the results immediately?
Put simply, I need the results to appear in the Rating section immediately.
I can see that the cloudkitty.conf file has the following section:
# Rating period in seconds. (integer value)
#period = 3600
#wait_periods = 2
Will changing these values help?
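For what it's worth, a sketch of what that change could look like (assuming these options live under the [collect] section, as in recent CloudKitty releases; with the defaults, period = 3600 and wait_periods = 2 already imply a lag of a few hours, which matches what you are seeing):

[collect]
# Assumption: a shorter rating period makes CloudKitty collect and rate usage
# more often, so rated_data_frames fills in sooner (default is 3600 seconds).
period = 300
# Number of periods to wait before processing, to let metrics land in the
# backend (default 2). Lowering it reduces the lag further.
wait_periods = 1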