Application Insights filter Data Out request - azure-application-insights

I have a chart in Application Insights and I want to see which request resulted in this massive data out of 64 MB during a time period. It drained our application memory. Does anyone know how to filter the Application Insights data down to see what this was?

For a start, you can run the following query in Analytics in Application Insights. Since you have the time (UTC), this query lets you filter the requests down to that window:
requests
| extend itemType = iif(itemType == 'request',itemType,"")
| where (itemType == 'request' and (timestamp >= datetime(2019-02-26T07:21:00.000Z) and timestamp <= datetime(2019-02-26T07:22:00.000Z)))
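
If you would rather run the query programmatically than in the portal, here is a minimal Python sketch using the azure-monitor-query package; the workspace id is a placeholder, and a workspace-based Application Insights resource is assumed:

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Hypothetical workspace id of a workspace-based Application Insights resource
WORKSPACE_ID = "<workspace-guid>"

client = LogsQueryClient(DefaultAzureCredential())

# Same filter as above: requests in the one-minute window around the spike
query = """
requests
| where timestamp >= datetime(2019-02-26T07:21:00.000Z)
    and timestamp <= datetime(2019-02-26T07:22:00.000Z)
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=None)
for table in response.tables:
    for row in table.rows:
        print(row)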

Related

Firebase Big Query extension - Exceeded rate limits: too many api requests per user per method for this user_method

I have the firebase extension for streaming data to Big Query installed https://extensions.dev/extensions/firebase/firestore-bigquery-export.
Each month I run a job to import data into my Firestore collection in batches.
This month I imported 2706 rows but only 2646 made it into Big Query (60 fewer).
I got the following errors from the extension:
Error: Exceeded rate limits: too many api requests per user per
method for this user_method. For more information, see
https://cloud.google.com/bigquery/docs/troubleshoot-quotas
Error: Process exited with code 16
at process.
I contacted Firebase support and they suggested I upgrade to the latest firebase-admin and firebase-functions packages, but these have breaking changes. Updating to the latest version of firebase-admin gave me errors. I have not had any more help from them, and it is still happening for multiple collections.
The options I see are:
- Update to the latest firebase-admin and firebase-functions packages and change my code to work with the breaking changes. I think this is unlikely to help.
- Update the firebase extension to the latest version, from 0.1.24 to 0.1.29, which now includes a flag called "Use new query syntax for snapshots" that can be turned on. I can't find much information about this.
- Increase the Big Query quota somehow.
- Slow down the data being entered into Firestore, or add it daily/weekly rather than monthly.
Here is my code in Node.js:
firebase-admin: 9.12.0
firebase-functions: 3.24.1
firebase/firestore-bigquery-export#0.1.24
const platformFeesCollectionPath = `platformFees`;
const limit = 500; // Firestore batched writes allow at most 500 operations
let batch = db.batch();
let totalFeeCount = 0;
let counter = 0;
for (const af of applicationFees) {
    const docRef = db.collection(platformFeesCollectionPath).doc();
    batch.set(docRef, { ...af, dateCreated: getTimestamp(), dateModified: getTimestamp() });
    counter++;
    if (counter === limit) {
        await batch.commit();
        console.log(`Platform fees batch run for ${counter} platform fees`);
        batch = db.batch();
        totalFeeCount = totalFeeCount + counter;
        counter = 0;
    }
}
if (counter > 0) {
    // Commit the final partial batch
    await batch.commit();
    totalFeeCount = totalFeeCount + counter;
}
console.log(`Platform fees batch run for ${totalFeeCount} platform fees`);
Update:
If I look in the GCP logs using the query:
protoPayload.status.code ="7"
protoPayload.status.message: ("Quota exceeded" OR "limit")
I can see many of these errors.
Edit:
Added issue to the repo:
github.com/firebase/extensions/issues/1394
Update:
It is still not working with v0.1.29 of the BigQuery extension. I am getting the same errors.
Would it be possible to provide the BigQuery extension version number for your current installation? This can be found on your Firebase console => Extensions tab.
The error "Exceeded rate limits: too many api requests" was one we hoped to resolve with a release in June 2022, so it may be resolved by an upgrade; at the very least, the example above gives the maintenance team a way of reproducing the bug.
In addition, if you create an issue on the repository, it will be easier for the maintainers to track this issue.
The error that you are facing occurs because the maximum number of API requests per second per user per method has been exceeded. BigQuery returns this error when you hit the rate limit for the number of API requests to a BigQuery API per user per method. For more information, see the maximum number of API requests per second per user per method rate limit in All BigQuery API.
To prevent this error you could try out the following:
- Reduce the number of API requests, or add a delay between multiple API requests, so that the number of requests stays under this limit.
- The streaming inserts API has costs associated with it and has its own set of limits and quotas. To learn about the cost of streaming inserts, see BigQuery pricing.
- You can request a quota increase by contacting support or sales. For additional quota, see Request a quota increase. Requesting a quota increase might take several days to process. To provide more information for your request, we recommend that it include the priority of the job, the user running the query, and the affected method.
- You can retry the operation after a few seconds, using exponential backoff between retry attempts, that is, increasing the delay between each retry; a sketch of this approach follows the list.
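
As an illustration of the last point, here is a minimal Python sketch of exponential backoff around a streaming insert; the table id, the row payload, and the use of the google-cloud-bigquery client are assumptions for the example, not something taken from the question:

import random
import time

from google.api_core.exceptions import Forbidden, TooManyRequests
from google.cloud import bigquery

client = bigquery.Client()
TABLE_ID = "my-project.my_dataset.platform_fees"  # hypothetical table

def insert_with_backoff(rows, max_retries=5):
    """Stream rows into BigQuery, backing off exponentially on rate limits."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            errors = client.insert_rows_json(TABLE_ID, rows)
            if not errors:
                return
            raise RuntimeError(f"Insert returned errors: {errors}")
        # Which exception surfaces the rate limit depends on the API path;
        # quota errors typically arrive as 403 Forbidden or 429 TooManyRequests.
        except (Forbidden, TooManyRequests):
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            time.sleep(delay + random.uniform(0, 0.5))
            delay *= 2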

429 Client Error: TooManyRequests for url

I have a script that executes an ingestion statement at a fixed interval. In a simplified form it looks as follows:
import time
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder, ClientRequestProperties
cluster = "https://<adxname>.centralus.kusto.windows.net"
client_id = "<sp_guid>"
client_secret = "<sp_secret>"
authority_id = "<tenant_guid>"
db = "db-name"
kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    cluster, client_id, client_secret, authority_id)
client = KustoClient(kcsb)
query = """
.append my_table <|
another_table | where ... | summarize ... | project ...
"""
while True:
    client.execute(db, query)
    time.sleep(30.0)
So it executes a small query every 30 seconds. The query takes only milliseconds to complete. Lib version: azure-kusto-data==3.1.0.
It works fine for a while, but after some time it starts failing with this error:
requests.exceptions.HTTPError: 429 Client Error: TooManyRequests for
url:
https://adxname.centralus.kusto.windows.net/v1/rest/mgmt
azure.kusto.data.exceptions.KustoApiError: The control command was
aborted due to throttling. Retrying after some backoff might succeed.
CommandType: 'TableAppend', Capacity: 1, Origin:
'CapacityPolicy/Ingestion'.
Looking at the CapacityPolicy/Ingestion mentioned in the error, I cannot see how it can be relevant. This policy is left at its default:
.show cluster policy capacity

"Policy": {
    "IngestionCapacity": {
        "ClusterMaximumConcurrentOperations": 512,
        "CoreUtilizationCoefficient": 0.75
    },
    ...
}
I do not quite understand how it can be related to concurrent operations or core utilization, as the ingestion is fast and rarely executed.
How do I troubleshoot the issue?
Assuming that you have monitor or admin permission on the database, you can run the following to see the ingestion activity for any given time period (ensure that you include all the relevant command types in the "in" clause), for example:
.show commands
| where StartedOn > ago(1h)
| where CommandType in ("DataIngestPull", "TableAppend", "TableSetOrReplace", "TableSetOrAppend")
| summarize count() by CommandType
As a side note, for this type of operation, you should consider using materialized views.
According to the error message, the ingestion capacity for your cluster is 1. This likely indicates you're using the dev SKU that has a single node with 2 cores.
With such a setup, only a single ingestion operation can run at a given time. Any additional concurrent ingestions will be throttled.
You can either implement tighter control over the client(s) ingesting into the cluster, so that no more than a single ingestion command attempts to run concurrently and the calling code recovers from throttling errors (a sketch of such a retry loop is below), or scale the cluster up/out: by adding more nodes/cores you increase the ingestion capacity.
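
A minimal Python sketch of such a retry loop might look like this; it reuses client, db, and query from the script above, and simply retries on any KustoApiError, whereas production code would inspect the error to confirm it is actually a throttling error before retrying:

import time

from azure.kusto.data.exceptions import KustoApiError

def execute_with_backoff(client, db, command, max_retries=5):
    """Run a control command, retrying with exponential backoff on throttling."""
    delay = 5.0
    for attempt in range(max_retries):
        try:
            return client.execute(db, command)
        except KustoApiError:
            # "Retrying after some backoff might succeed", per the error message
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2

while True:
    execute_with_backoff(client, db, query)
    time.sleep(30.0)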
You can also verify who/what else is ingesting into your cluster by using .show commands.

Logs of Azure Durable Function in Application Insights

I see the following logs in Application Insights after running an Azure Durable Function:
Does 'Response time' indicate the execution time of each function? If so, is there a way to run a kusto query to return the Response time and name of each function?
Yes, Response Time is the time taken to complete execution; in other words, Response Time = Latency + Processing Time.
You can use the KQL query below to pull the function name and response time:
requests
| project
    timestamp,
    functionName = name,
    FuncexecutionTime = parse_json(customDimensions).FunctionExecutionTimeMs,
    operation_Id,
    functionappName = cloud_RoleName

What's the difference between transfer-response and forward-request errors in API management?

A large number of requests over our Azure API Management result in the ClientConnectionFailure exception.
By querying the logs I see two variants of the error:
exceptions
| where cloud_RoleName == "..."
| summarize num = count(itemCount) by problemId, outerMessage
| order by num
problemId: ClientConnectionFailure at transfer-response, outerMessage: A task was canceled, count: 403,249
problemId: ClientConnectionFailure at forward-request, outerMessage: The operation was canceled, count: 55,531
Based on this post, the problem could be time-outs or that clients abandon connections. With response times generally within 500ms I'm inclined to rule out the first.
The question is: what is the difference between transfer-response and forward-request, and does it provide any clues as to what is going on?
Transfer-response means that the client dropped the connection after it started receiving the response.
Forward-request means that the client dropped the connection while the APIM gateway was sending the request to the back end or waiting for a response from the back end.

How does Google Analytics calculate 10000 requests per Profile?

I am fetching data from Google Analytics for the metrics (pageviews, unique pageviews, time on page, exits) as below:
DataResource.GaResource.GetRequest r = GAS.Data.Ga.Get(
    profileID,
    startdate.ToString("yyyy-MM-dd"),
    enddate.ToString("yyyy-MM-dd"),
    "ga:pageviews,ga:uniquePageviews,ga:timeOnPage,ga:exits");
r.Dimensions = "ga:pagePath";
r.Filters = "ga:pagePath=~ItemID=" + strPagePath + "*";
r.MaxResults = 1000;
GaData d = r.Fetch();
Then I received the following exception after fetching data (metrics) for some random number of videos:
Error while fetching pageviews From GA Google.Apis.Requests.RequestError
Quota Error: profileId ga:****** has exceeded the daily request limit. [403]
Errors [
    Message[Quota Error: profileId ga:****** has exceeded the daily request limit.] Location[ - ] Reason[dailyLimitExceeded] Domain[global]
]
I am fetching these four metrics (pageviews, unique pageviews, and so on) for one ItemID.
Does Google Analytics count this as four different requests or one single request?
Each request you send against the Google Analytics API counts as one. The quota is not project or user based.
Pagination:
In your request above you set maxResults to 1000; if the total number of rows in your result is 100,000, then you will have to make 100 requests to get all of the data. A sketch of such a pagination loop is below.
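
For illustration, a pagination loop over the (legacy) Core Reporting API v3 with google-api-python-client might look like this Python sketch; the service object, view id, and date range are placeholders:

# `service` is assumed to be an authorized Analytics v3 service object,
# e.g. from googleapiclient.discovery.build("analytics", "v3", ...).
PROFILE_ID = "ga:1234567"  # hypothetical view (profile) id
MAX_RESULTS = 1000

start_index = 1
all_rows = []
while True:
    # Each call here counts as one request against the 10,000/day quota.
    response = service.data().ga().get(
        ids=PROFILE_ID,
        start_date="2017-01-01",
        end_date="2017-01-31",
        metrics="ga:pageviews,ga:uniquePageviews,ga:timeOnPage,ga:exits",
        dimensions="ga:pagePath",
        start_index=start_index,
        max_results=MAX_RESULTS,
    ).execute()
    all_rows.extend(response.get("rows", []))
    # 100,000 total rows at 1,000 per page -> 100 requests
    if start_index + MAX_RESULTS > response.get("totalResults", 0):
        break
    start_index += MAX_RESULTS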
All APIs:
Requests to all of the APIs count against the same quota, so if you are also using the Management API, it counts as well as the Reporting API.
All users and applications:
Now here is the fun part about the current quota system: it is not project related. Let's say my company has a profile 1234567, and our whole marketing team has access. Each member of the marketing team likes a different app, and they all install the app they like best. They are all drawing from the same 10,000-request quota.
Reset:
Your quota will reset at midnight US West Coast time. No one will be able to access that view id until then. Top tip: when testing, create a development view under the web property to request from; then you won't blow out your production view.
