Sampling of telemetry over the REST API of Application Insights - azure-application-insights

From a Java application we have built a logger that uses the REST API for Application Insights on Azure.
But we are experiencing sampling of the telemetry we send over the wire. So my question is: is it possible to configure sampling when using the REST API?
Thanks

You can do sampling in App Insights, which is explained here.
There are three varieties of sampling methods offered by Azure App Insights:
Adaptive sampling automatically adjusts the volume of telemetry sent from the SDK in your ASP.NET/ASP.NET Core app, and from Azure Functions. This is the default sampling when you use the ASP.NET or ASP.NET Core SDK. Adaptive sampling is currently only available for ASP.NET server-side telemetry, and for Azure Functions.
Fixed-rate sampling reduces the volume of telemetry sent from both your ASP.NET or ASP.NET Core or Java server and from your users' browsers. You set the rate. The client and server will synchronize their sampling so that, in Search, you can navigate between related page views and requests.
Ingestion sampling happens at the Application Insights service endpoint. It discards some of the telemetry that arrives from your app, at a sampling rate that you set. It doesn't reduce telemetry traffic sent from your app, but helps you keep within your monthly quota. The main advantage of ingestion sampling is that you can set the sampling rate without redeploying your app. Ingestion sampling works uniformly for all servers and clients, but it does not apply when any other types of sampling are in operation.
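Since a custom logger that posts directly to the ingestion endpoint has no SDK to do adaptive or fixed-rate sampling for it, one option is to implement fixed-rate sampling client-side before sending. The sketch below (Python, purely illustrative; `post_to_app_insights` stands in for your existing REST call) hashes the operation id so all telemetry belonging to one operation is kept or dropped together, which is the same idea the official SDKs use:

```python
import hashlib

SAMPLING_PERCENTAGE = 25.0  # keep roughly 25% of telemetry


def should_sample(operation_id: str) -> bool:
    """Decide whether to send an item. Hashing the operation id means all
    telemetry for one operation is kept or dropped together, so related
    items remain navigable in Search."""
    digest = hashlib.sha256(operation_id.encode("utf-8")).digest()
    # Map the first 8 bytes of the hash onto [0, 100).
    bucket = int.from_bytes(digest[:8], "big") % 10000 / 100.0
    return bucket < SAMPLING_PERCENTAGE


def send_if_sampled(item: dict) -> bool:
    """Forward a telemetry item only when it is sampled in, stamping the
    effective sampling rate so counts can be re-scaled downstream."""
    if not should_sample(item.get("operationId", "")):
        return False
    item["sampleRate"] = SAMPLING_PERCENTAGE
    # post_to_app_insights(item)  # your existing REST call goes here
    return True
```

Note that the exact envelope field for the sampling rate should be checked against the current ingestion schema; the field name above is an assumption.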

Related

Application Insights shows GET calls while my code does not have any such calls being made

We have integrated Azure Application Insights with our bot, built using the Azure Bot Framework with Node.js and TypeScript. Everything looks fine and we can see telemetry data flowing in.
In the failures section, we can see the operation name "GET /api/messages" appearing repeatedly: one failed call (405) and one successful call (200).
But we have no GET operation on "/api/messages" in our code. We only have POST operations.
We are unable to understand why the telemetry shows a GET operation, with one failed and one successful.
Any help is appreciated.
The operation_SyntheticSource field of request telemetry is often used by Microsoft/Azure to indicate traffic generated by infrastructure or bots. Examples are health requests, keep-alive traffic, and spider bots.
There are options to filter out telemetry, so it is possible to filter out telemetry caused by synthetic traffic. See the docs.
Telemetry processors can be configured using DI.
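The filtering idea can be sketched as a processor function that inspects each telemetry envelope before it is sent, dropping synthetic traffic and the unexpected GET probes. This is an illustrative Python sketch, not the actual SDK hookup (which differs per language and is typically registered via DI in .NET); the `ai.operation.syntheticSource` tag follows the Application Insights schema, but the envelope shape here is assumed for illustration:

```python
def synthetic_filter(envelope: dict) -> bool:
    """Illustrative telemetry processor: return False to drop an item.
    Drops anything flagged as synthetic traffic (health probes, spider
    bots) and GET probes against the bot's POST-only messages route."""
    tags = envelope.get("tags", {})
    if tags.get("ai.operation.syntheticSource"):
        return False  # infrastructure/bot-generated traffic
    base_data = envelope.get("data", {}).get("baseData", {})
    if base_data.get("name", "").startswith("GET /api/messages"):
        return False  # only POST is expected on this route
    return True
```

In the real SDKs you would register an equivalent processor with the telemetry pipeline rather than call it directly.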

Audit logging CosmosDB

Wanting to validate my ARM template was deployed ok and to get an understanding of the telemetry options...
Under what circumstances do the following get logged to Log Analytics?
DataPlaneRequests
MongoRequests
QueryRuntimeStatistics
Metrics
From what I can tell, after arduously connecting in different ways over the last few days:
DataPlaneRequests are logged for:
SQL API calls
Table API calls, even when the account was set up for the SQL API
Graph API calls against an account set up for the Graph API
Table API calls against an account set up for the Table API
MongoRequests are logged for:
Mongo requests, even when the account was set up for the SQL API
However, I haven't been able to see anything for QueryRuntimeStatistics (even when turning on PopulateQueryMetrics), nor have I seen any AzureMetrics appear.
Thanks, Alex, for spending time trying out the different logging options for Azure Cosmos DB.
There are primarily two types of monitoring paths for Azure Cosmos DB.
Metrics: These are low-latency (<5 min), aggregated metrics exposed through the Azure Monitor API for consumption. These metrics are primarily used to diagnose live-site issues in the app.
Logs: These are raw request logs arriving with 2+ hours of latency, used by customers primarily for audit scenarios, to understand who accessed the data.
Depending on your need you can choose either of the approaches.
DataPlaneRequests by default shows all requests across all the APIs, while MongoRequests only shows Mongo-specific calls. Please note that Mongo requests will also appear in DataPlaneRequests.
Metrics are not yet visible in Log Analytics due to a known issue which our partner team is fixing.
Let me know if you have any further questions here.

Can we set a quota limit on Cognitive Services API calls?

I was just wondering whether it is possible to set a limit on the number of calls in Cognitive Services - this is in the context of keeping API keys in the app.
There is currently no way to set limits or caps on your quota. If you are subscribed to a paid subscription through Azure, you are able to monitor your monthly usage.
This information appears when you are viewing the resource item (your subscription) in Azure.

Load testing a notification hub

I have a requirement to deliver push notifications to an app that runs on iOS and Android, with approximately 2 million installations in total. I've built a PoC using Azure Notification Hubs. This works fine tested against a handful of phones / tablets I could borrow. I've also tried the same with Amazon's SNS and that worked well too.
I have no reason to believe that hubs wouldn't scale as I need it to but I wondered if there was any provision for load testing. I can't borrow 2m phones but maybe I could configure a hub to call a service I host, thereby simulating a push to either the GCM or APNS gateways? This would help build confidence in my end-to-end performance / volume testing.
I believe this is not supported. If there is a load testing capability, it's internal to Azure and not offered for public use.
However, Microsoft does provide an SLA for the Basic and Standard tiers of Notification Hubs. They claim they use the Notification Hubs service to deliver things like the Breaking News alerts for the Bing News apps. The SLA guarantees 99.9% successful message delivery within five minutes (over a month).
The Service Bus SLA (which covers Notification Hubs) is here: http://www.microsoft.com/en-us/download/details.aspx?id=4767
I could not find SLAs for GCM or APNS.
Notification Hubs does provide a fairly rich reporting API that you can query with OData filters to determine how many notifications are sent over given periods of time.
But I expect that the variable load conditions that affect the service as a whole will mean that no specific promise is made about the specific timeliness of any delivery (within the five-minute guaranteed delivery time). In other words, all of your 2 million notifications might be delivered within fifteen seconds, or it might take 4 minutes to send the first message, with all of them delivered at 4.9 minutes, depending on who else is using the service and how heavily they are using it.
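For the reporting API mentioned above, queries are expressed as OData filters. As a sketch of how such a filter for a time window might be assembled (the field names `StartTime`/`EndTime` are assumptions to adapt to the actual reporting schema; only the `$filter`/`$top` parameter names are standard OData):

```python
from datetime import datetime, timezone
from urllib.parse import urlencode


def build_metrics_query(start: datetime, end: datetime, top: int = 100) -> str:
    """Build the query string for an OData-filtered reporting request
    over a time window. Field names are placeholders; consult the
    current Notification Hubs reporting docs for the real schema."""
    flt = (
        f"StartTime ge datetime'{start.isoformat()}' "
        f"and EndTime le datetime'{end.isoformat()}'"
    )
    return urlencode({"$filter": flt, "$top": top})


qs = build_metrics_query(
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 1, 2, tzinfo=timezone.utc),
)
```

The resulting string would be appended to the reporting endpoint URL for your namespace, authenticated the same way as your other management calls.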

LinkedIn API throttle limit

Recently I was developing an application using the LinkedIn People Search API. The documentation says that a developer registration has 100,000 (1 lakh) API calls per day, but when I registered this API and ran a Python script, after some 300 calls it said the throttle limit was exceeded.
Has anyone faced this kind of issue using the LinkedIn API? Comments are appreciated.
Thanks in advance.
It's been a while, but the stats suggest people still look at this. I'm experimenting with the LinkedIn API and can provide some more detail.
The typical throttles are stated as both a max (e.g. 100K) and a per-user-token number (e.g. 500). Those numbers together mean you can get up to a maximum of 100,000 calls per day to the API but even as a developer a single user token means a maximum of 500 per day.
I ran into this, and after setting up a barebones app and getting some users I can confirm a daily throttle of several thousands of API calls. [Deleted discussion of what was probably, upon further consideration, an accidental back door in the LinkedIn API.]
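Given that the limits above are fixed per day, it can help to track your own call budget client-side so the application backs off before the API rejects it. A minimal sketch (the limit of 500 is only an example; use whatever your key's documented quota is):

```python
from datetime import date


class DailyCallBudget:
    """Track API calls against a per-day cap so the client can stop
    before the remote throttle kicks in."""

    def __init__(self, daily_limit: int = 500):
        self.daily_limit = daily_limit
        self._day = date.today()
        self._used = 0

    def try_acquire(self) -> bool:
        """Return True and consume one call from today's budget,
        or False if the budget is exhausted."""
        today = date.today()
        if today != self._day:  # new day: reset the counter
            self._day, self._used = today, 0
        if self._used >= self.daily_limit:
            return False  # budget exhausted; back off
        self._used += 1
        return True
```

Calling `try_acquire()` before each request, and queuing or dropping work when it returns False, avoids burning requests that would only come back as throttle errors.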
As per the Throttle Limits published by LinkedIn:
LinkedIn API keys are throttled by default. The throttles are designed to ensure maximum performance for all developers and to protect the user experience of all users on LinkedIn.
There are three types of throttles applied to all API keys:
Application throttles: These throttles limit the number of each API call your application can make using its API key.
User throttles: These throttles limit the number of calls for any individual user of your application. User-level throttles serve several purposes, but in general are implemented where there is a significant potential impact to the user experience for LinkedIn users.
Developer throttles: For people listed as developers on their API keys, they will see user throttles that are approximately four times higher than the user throttles for most calls. This gives you extra capacity to build and test your application. Be aware that the developer throttles give you higher throttle limits as a developer of your application. But your users will experience the User throttle limits, which are lower. Take care to make sure that your application functions correctly with the User throttle limits, not just the throttle limits for your usage as a developer.
Note: To view current API usage of your application and to ensure you haven't hit any throttle limits, visit https://www.linkedin.com/developer/apps and click on "Usage & Limits".
The throttle limit for individual users of People Search is 100, with 400 being the limit for the person that is associated with the Application as the developer:
https://developer.linkedin.com/documents/throttle-limits
When you run into a limit, view the API usage for the application on the application page to see which throttle you are hitting.
