Capture all the telemetry in Azure Application Insights

I want to capture all the requests in Application Insights. The example below, from Microsoft's documentation, is confusing: it doesn't say whether 'N' is the number of requests to capture or the total number of requests. Should I set 100 percent or 1 percent to capture all of them?
https://learn.microsoft.com/en-us/azure/azure-monitor/app/sampling
<Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.SamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel">
  <!-- Set a percentage close to 100/N where N is an integer. -->
  <!-- E.g. 50 (=100/2), 33.33 (=100/3), 25 (=100/4), 20, 1 (=100/100), 0.1 (=100/1000) -->
  <SamplingPercentage>100</SamplingPercentage>
</Add>

Thank you Tiny Wang. Posting your comment discussion as an answer to help other community members.
Setting SamplingPercentage to 100 means it captures 100% of requests (all requests are captured). Setting it to 10 means it captures roughly 10 requests out of every 100.
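The comment in the XML snippet can be restated numerically: SamplingPercentage = 100/N keeps roughly one out of every N items. A small sketch of that relationship (the helper name is just for illustration):

```python
def items_captured(total_requests, sampling_percentage):
    """Approximate number of requests retained by fixed-rate sampling.

    SamplingPercentage = 100 / N keeps roughly 1 out of every N requests.
    """
    return round(total_requests * sampling_percentage / 100)

# 100% keeps everything; 10% keeps ~1 in 10; 0.1% keeps ~1 in 1000.
print(items_captured(100, 100))   # 100 - all requests captured
print(items_captured(100, 10))    # 10
print(items_captured(1000, 0.1))  # 1
```

So to capture everything, set the percentage to 100; the 1 (=100/100) in the comment is the opposite extreme, keeping only 1 in 100.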

Related

How to analyze jmeter TPS graph

I made a TPS graph using JMeter, but I can't interpret what it means. Please tell me how to interpret the graph.
Request: GET, named "A"
Threads: 1000
Ramp-up period: 100
Loop count: 1
Active Threads Over Time (image 1)
Response Times Over Time (image 2)
Transactions per Second (image 3)
Your graphs mean that:
You run 1 thread (virtual user)
You can reach 10 transactions per second with this 1 thread
Average response time is around 100 milliseconds
You might also be interested in:
Apache JMeter HTML Reporting Dashboard - an alternative, automated way of generating pretty zoomable charts
Apache JMeter Glossary - so you understand what your metrics mean in general
Performance Metrics for Websites - so you understand how to correlate the metrics and what the possible relationships between them are
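The numbers in the answer hang together via a simple throughput relationship (essentially Little's law): with no think time, throughput ≈ active threads / average response time. A quick sketch with an illustrative helper:

```python
def expected_tps(active_threads, avg_response_time_s):
    """Little's law estimate: throughput = concurrency / response time.

    Assumes each thread fires its next request immediately (zero think time).
    """
    return active_threads / avg_response_time_s

# 1 thread with a ~100 ms average response time sustains about 10 TPS,
# matching the three graphs described above.
print(expected_tps(1, 0.100))  # 10.0
```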

HERE API rate limit header explanation

I am using the route matching HERE api.
At some point, due to the large number of requests, I receive a 429 error with the following headers:
X-Ratelimit-Limit:[250, 250;w=10]
X-Ratelimit-Reset:[6]
Retry-After:[6]
These are the only rate limiting related headers I receive.
I would like an explanation of the X-Ratelimit-Limit:[250, 250;w=10] header.
What does the 250 and w=10 mean?
The first number is the maximum number of requests allowed for the given API in the current time window.
The second section refers to the quota policy.
An example policy of 100 quota-units per minute would be written as:
100;window=60
For the current example it specifies 250 requests every 10 seconds.
More details at: the RFC draft for rate limit headers.
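Under that reading, the header value splits into the current-window limit and the quota policy. A sketch of a parser for the value shown in the question (the function is hypothetical; the format is assumed from the `250, 250;w=10` example, where the policy part means 250 requests per 10-second window):

```python
def parse_ratelimit_limit(value):
    """Parse an X-Ratelimit-Limit value such as "250, 250;w=10".

    Returns (limit, window_seconds): the request limit and the
    window length from the "limit;w=seconds" quota policy.
    """
    parts = [p.strip() for p in value.split(",")]
    limit = int(parts[0])
    window = None
    if len(parts) > 1 and ";" in parts[1]:
        _policy_limit, policy = parts[1].split(";", 1)
        for param in policy.split(";"):
            key, _, val = param.partition("=")
            if key.strip() == "w":
                window = int(val)
    return limit, window

print(parse_ratelimit_limit("250, 250;w=10"))  # (250, 10)
```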

How do I remedy the Pagespeed Insights message "pages served from this origin does not pass the Core Web Vitals assessment"?

In Pagespeed insights, I get the following message in Origin Summary: "Over the previous 28-day collection period, the aggregate experience of all pages served from this origin does not pass the Core Web Vitals assessment."
[screenshot of the message in PageSpeed Insights]
Does anyone know what % of URLs have to pass the test in order to change this? Or what the criteria are?
Explanation
Let's use Largest Contentful Paint (LCP) as an example.
Firstly, the pass/fail is not based on the percentage of URLs; it is based on the average time/score.
This is an important distinction: you could have 50% of the data fail, but if it only fails by 0.1s (2.6s) and the other 50% of the data passes by 1 second (1.5s), the average (2.05s) is a pass.
Obviously this is an over-simplified example, but hopefully you get the idea that in theory you could have 50% of your site in the red and still pass, which is why the percentages in each category are more for diagnostics.
If the average time for LCP across all pages in the CrUX dataset is less than 2.5 seconds ("Good") then you will get a green score and that is a pass.
If the time is less than 4 seconds the score will be orange ("Needs improvement"), but this still counts as a fail.
Over 4 seconds it fails and will be red ("Poor").
Passing criteria
So you need all of the following to be true to pass the Core Web Vitals (at the time of writing):
Largest Contentful Paint (LCP) average is less than 2.5 seconds
First Input Delay (FID) is less than 100ms
Cumulative Layout Shift (CLS) is less than 0.1
If any one of those is over the threshold you will fail, even if the other two are within the green / passes.
FID - when running Lighthouse (or PageSpeed Insights) on a page, you do not get FID as part of the synthetic test (Lab Data).
Instead you get Total Blocking Time (TBT) - this is a close enough approximation for FID in most circumstances, so use that (or run a performance trace).
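The pass/fail logic described above (every metric must be under its threshold; a single miss fails the whole assessment) can be sketched as:

```python
# "Good" thresholds as listed in the passing criteria above.
THRESHOLDS = {"lcp_s": 2.5, "fid_ms": 100, "cls": 0.1}

def passes_core_web_vitals(lcp_s, fid_ms, cls):
    """All three metrics must be under their threshold to pass;
    one failing metric fails the whole assessment."""
    return (lcp_s < THRESHOLDS["lcp_s"]
            and fid_ms < THRESHOLDS["fid_ms"]
            and cls < THRESHOLDS["cls"])

print(passes_core_web_vitals(2.0, 80, 0.05))  # True
print(passes_core_web_vitals(2.0, 80, 0.25))  # False - CLS alone fails it
```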

How to handle throttling when using the Azure Translator Text API?

When I send too many requests to the Azure Translator Text API I sometimes receive 429 responses from the API, without any indication of how to properly throttle the request count. I have found some documentation about throttling, but it doesn't seem to apply to this specific API: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits
Does anybody know if there is a similar way to get the remaining request count, or the time to wait before another request should be made? Or do I have to implement my own logic to handle throttling?
The Azure Translator Text API is a bit specific because the announced limit is not on the number of requests but on the number of characters.
As mentioned in the documentation here, the limit depends on the type of key:
Tier / Character limit
F0: 2 million characters per hour
S1: 40 million characters per hour
S2: 40 million characters per hour
S3: 120 million characters per hour
S4: 200 million characters per hour
And I guess there is also a (more technical) request limit, not clearly indicated in the documentation.
To be clear, here are the limits of Microsoft Translator for the free tier (F0):
2,000,000 characters per hour/month
33,300 characters per minute
10,000 characters per second/request
The limit resets 60 seconds after being blocked.
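Since the API does not advertise a remaining-quota header, one common approach is to honor the Retry-After header on a 429 and otherwise back off exponentially. A sketch with a simulated service (the helper and the tuple-returning callable are illustrative, not part of any Azure SDK):

```python
import time

def call_with_backoff(send, max_retries=5, base_delay=1.0):
    """Retry a callable that returns (status_code, payload, retry_after)
    until it succeeds or retries are exhausted. Honors the service's
    Retry-After value when present, else backs off exponentially."""
    for attempt in range(max_retries + 1):
        status, payload, retry_after = send()
        if status != 429:
            return payload
        delay = retry_after if retry_after is not None else base_delay * (2 ** attempt)
        time.sleep(delay)
    raise RuntimeError("rate limit: retries exhausted")

# Simulated service: throttles twice, then answers.
responses = iter([(429, None, 0), (429, None, None), (200, "translated text", None)])
print(call_with_backoff(lambda: next(responses), base_delay=0.01))  # translated text
```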

VS2008 Load Testing - Page Response Time

I am running a load test from VS 2008 on my ASP.NET web application. The thing I notice is that for some of my pages the Average Page Time is around 20.
Does this mean it takes 20 seconds for the server to render the page before it sends the response? Or is it simply 20 seconds until the whole page is fully loaded in the client's browser?
Does this statistic take Network Type into account? Say I change from 52 kbps to 1.5 Mbps; is this statistic supposed to change?
Another thing: my Average Response Time is 0.21, whilst some pages have an Average Page Time of 20. Why are they so different? What does each mean?
Thank you.
Average Page Time usually just includes the time to receive all of the bytes over the network. So yeah, it may well change on a different bandwidth.
EDIT: As for your second question, Average Response Time is the statistic across ALL requests issued during the test.
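One way to see why the two statistics can differ so much: a page is composed of many dependent requests (HTML, CSS, JS, images), and page time aggregates them, while response time is measured per request. An illustrative sketch (assuming, unrealistically, that requests are fetched one after another with no overlap):

```python
def page_time(request_times):
    """Total page time as the sum of its dependent requests' times."""
    return sum(request_times)

def avg_response_time(request_times):
    """Average time of an individual request."""
    return sum(request_times) / len(request_times)

# ~95 requests at ~0.21 s each add up to a ~20 s page time,
# which is why a 0.21 s response time and a 20 s page time can coexist.
times = [0.21] * 95
print(round(avg_response_time(times), 2))  # 0.21
print(round(page_time(times), 2))          # 19.95
```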
