How to increase the request limit on reads in an Azure account when getting a 429 error - azure-resource-manager

When I send more than 60 requests to the site, I get a 429 error. The response headers look like this at first:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
and like this once the quota is exhausted:
x-ratelimit-limit: 60
x-ratelimit-remaining: 0
x-ratelimit-reset: 1594787767
429 Too Many Requests
It's a PHP service app on an Apache server.

The short answer is to not call the API so quickly. You are being throttled because you are calling too frequently.
The longer answer is, it depends. What API are you calling? Every service in Azure has different rate limits for different activities. I am not aware of one that throttles at a limit of 60, but if you can provide more details then we can provide more info. You can also check https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits for all of the documented limits and see if you are hitting a hard limit or a soft limit.
If it is a soft limit, then you can usually open a support incident and have the limit raised (there are some limitations such as capacity, scenario, etc).
If it is a hard limit then you will need to adjust your calling pattern.
If it is a rate limit on one of your own resources (i.e., calling a storage account), then you may be able to resize your resource to a more expensive SKU to increase those limits.
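When a 429 does come back, the response usually tells you how long to wait. Here is a minimal sketch of reading those hints, assuming header names like the ones in the question; `Retry-After` is the standard HTTP hint, and the fallback value is an arbitrary choice:

```python
import time

def seconds_until_reset(headers, now=None):
    """Work out how long to wait after a 429, preferring the server's hints.

    Checks Retry-After (a delay in seconds) first, then x-ratelimit-reset
    (a Unix timestamp, as in the headers above). Falls back to a default.
    """
    now = time.time() if now is None else now
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return max(0.0, float(retry_after))
    reset = headers.get("x-ratelimit-reset")
    if reset is not None:
        return max(0.0, float(reset) - now)
    return 5.0  # conservative default when the server gives no hint
```

In the PHP app the same idea applies: sleep for this many seconds before retrying instead of immediately re-sending the request.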

Related

Firebase Cloud Messaging: "Topic Quota Exceeded"

I have a webapp and a Windows Service which communicate using Firebase Cloud Messaging. The webapp subscribes to a couple of Topics to receive messages, and the Windows Service sends messages to one of these Topics. In some cases there can be several messages per second, and it gives me this error:
FirebaseAdmin.Messaging.FirebaseMessagingException: Topic quota exceeded
I don't quite get it. Is there a limit to the number of messages that can be sent to a specific topic, or what does this mean?
So far I have only found info about topic names and subscription limits, but I couldn't find anything about a "topic quota", except maybe this page of the docs (https://firebase.google.com/docs/cloud-messaging/concept-options#fanout_throttling), although I am not sure it refers to the same thing, or whether and how it could be changed. In the Firebase Console I can't find anything either. Has anybody got an idea?
Well.. from this document it seems pretty clear that this can happen:
The frequency of new subscriptions is rate-limited per project. If you send too many subscription requests in a short period of time, FCM servers will respond with a 429 RESOURCE_EXHAUSTED ("quota exceeded") response. Retry with exponential backoff.
I agree that the document should've stated what volume triggers the blocking mechanism instead of just telling the developer to "Retry with exponential backoff". But, at the end of the day, Google also produced this document to help developers implement that mechanism properly. In a nutshell:
If the request fails, wait 1 + random_number_milliseconds seconds and retry the request.
If the request fails, wait 2 + random_number_milliseconds seconds and retry the request.
If the request fails, wait 4 + random_number_milliseconds seconds and retry the request.
And so on, up to a maximum_backoff time.
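Those steps can be sketched roughly as follows; `send` is a hypothetical stand-in for whatever call your Windows Service makes to FCM, and the retry cap is an arbitrary choice:

```python
import random
import time

def retry_with_backoff(send, max_retries=5, maximum_backoff=64.0, sleep=time.sleep):
    """Retry `send` per the quoted doc: wait 1, 2, 4, ... seconds plus a
    random fraction of a second, capped at `maximum_backoff`."""
    for attempt in range(max_retries):
        try:
            return send()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            delay = min(2 ** attempt, maximum_backoff) + random.random()
            sleep(delay)
```

The jitter (the random fraction) matters: without it, many clients that were throttled at the same moment all retry at the same moment and get throttled again.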
My conclusion: reduce the number of messages sent to the topic OR implement a retry mechanism to recover from unsuccessful attempts.
It could be one of these issues:
1. Too high a subscription rate
As noted here:
The frequency of new subscriptions is rate-limited per project. If you send too many subscription requests in a short period of time, FCM servers will respond with a 429 RESOURCE_EXHAUSTED ("quota exceeded") response. Retry with exponential backoff.
But this doesn't seem to be your problem, as you don't open new subscriptions but instead send messages at a high rate.
2. Too many messages sent to one device
As noted here:
Maximum message rate to a single device
For Android, you can send up to 240 messages/minute and 5,000 messages/hour to a single device. This high threshold is meant to allow for short term bursts of traffic, such as when users are interacting rapidly over chat. This limit prevents errors in sending logic from inadvertently draining the battery on a device.
For iOS, we return an error when the rate exceeds APNs limits.
Caution: Do not routinely send messages near this maximum rate. This could waste end users' resources, and your app may be marked as abusive.
Final notes
Fanout throttling doesn't seem to be the issue here, as that rate limit is really high.
The best ways to fix your issue would be:
Lower your rates: control the number of "devices" notified and overall limit your usage over short periods of time
Keep your rates as they are, but implement a back-off retry policy in your Windows Service App
Maybe look into a service more suited to your usage (as FCM is strongly focused on end-client notification), like Pub/Sub

Google Earth Engine quota policy

In the GEE documentation here there is limited information about the quota limits. Basically all it tells us is that there are separate limits for concurrent computation vs tile requests. I am hitting the 429 Too Many Requests often for computation requests.
In order to properly throttle my requests or add a queueing system then I would need to know details about the quota policy e.g. "the quota is X concurrent computations", "there's a rate limit of Y requests within a Z minute window".
Does anyone have knowledge of the actual quota policy?
Earth Engine's request quota limits are a fairly complex topic (I know because I work on them) and there are not currently any documented guarantees about what is available. I recommend that you implement automatic backoff that adapts to observed 429s rather than attempting to hardcode a precisely matching policy.
Also note that fetching data in lots of small pieces is not the best use of the Earth Engine API — as much as possible, you should let Earth Engine do the computation, reducing large data sets into just the answers you actually need. This will reduce your need to worry about QPS as opposed to concurrency, and reduce the total amount of request processing and computation startup overhead.
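Since the exact policy is undocumented, one way to follow the "adapt to observed 429s" advice is an AIMD-style allowance that shrinks when you get throttled and creeps back up on success. This is a rough sketch, not anything Earth Engine documents; all the numbers are arbitrary starting points:

```python
class AdaptiveLimit:
    """AIMD-style guess at an undocumented concurrency quota: halve the
    allowance when a 429 is observed, creep it back up after a full round
    of successes."""

    def __init__(self, start=8, floor=1, ceiling=32):
        self.limit = start      # current number of requests we allow in flight
        self.floor = floor
        self.ceiling = ceiling
        self._successes = 0

    def on_429(self):
        # Multiplicative decrease: back off hard when throttled.
        self.limit = max(self.floor, self.limit // 2)
        self._successes = 0

    def on_success(self):
        # Additive increase: one full round without a 429 earns +1.
        self._successes += 1
        if self._successes >= self.limit:
            self.limit = min(self.ceiling, self.limit + 1)
            self._successes = 0
```

Your request queue would then dispatch at most `limit` concurrent computations, calling `on_429`/`on_success` as responses come back.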

Firebase - Dynamic Links Limit

I've been trying to do some research into the use of Dynamic Links with Firebase and was curious if anyone knew of any limit on the number of dynamic links we can use. For example, we're looking to generate links server-side which are dynamic and will log users into our apps. However, this will go into the 10,000s, so will this be too extreme for Firebase and therefore not a viable solution?
Thanks in advance.
That should be fine - the limits on FDL will be in terms of how many you can create per second, so as long as you spread the creation out, you should be fine.
Based on my (short) experience, if using the free plan there is a limit of 100 links per 100 seconds per user, which means if you generate links on the backend (like I do) you are basically limited to creating 1 link per second, which is not much. If you exceed this limit you receive an error like this:
429 Too Many Requests
Insufficient tokens for quota 'DefaultQuotaGroup' and limit 'USER-100s' of service
Also a lot of times the Dynamic Links API returns error 503 Service Unavailable when generating links instead of 429, I don't know why but I don't think they have availability issues. It's just kind of confusing.
Here is what I see on my project's quotas page:
Default quota, per day: 100,000
Default quota, per 100 seconds: 5,000
Default quota, per 100 seconds, per user: 100
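Given a "100 requests per 100 seconds per user" style quota, a small client-side sliding-window limiter can keep you under it instead of eating 429s. A sketch (the class and its parameters are illustrative, not part of any Firebase SDK):

```python
import collections
import time

class SlidingWindowLimiter:
    """Client-side guard for an 'N calls per W seconds' style quota."""

    def __init__(self, max_calls=100, window=100.0, clock=time.monotonic):
        self.max_calls = max_calls
        self.window = window
        self.clock = clock
        self.calls = collections.deque()  # timestamps of recent calls

    def wait_time(self):
        """Seconds to wait before the next call is safe (0 if safe now)."""
        now = self.clock()
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()  # drop calls that fell out of the window
        if len(self.calls) < self.max_calls:
            return 0.0
        return self.window - (now - self.calls[0])

    def record(self):
        self.calls.append(self.clock())
```

Before each link-creation request, sleep for `wait_time()` seconds and then call `record()`; that spreads creation out exactly the way the accepted answer suggests.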

Overcome Marketo's quota limits

As far as I know, Marketo limits the number of REST API requests to 10,000 per day. Is there a way to overcome this limit? Can I pay to get more of them?
I found out that REST API requests and SOAP API requests are counted separately, but I'm trying to find a solution that is limited to the REST API.
Moreover, in order to get an access token I need to sacrifice a request. I need to know how long this access token stays alive in order to save as many requests as possible.
You can increase your limit just by asking your account manager. It costs about 15K per year to increase your limit by 10K API calls.
Here are the default limits in case you don't have them yet:
Default Daily API Quota: 10,000 API calls (counter resets daily at 12:00 AM CST)
Rate Limit: 100 API calls in a 20 second window
Documentation: REST API
You'll want to ask your Marketo account manager about this.
I thought I would update this with some more information since I get this question a lot:
http://developers.marketo.com/rest-api/
Daily Quota: Most subscriptions are allocated 10,000 API calls per day (which resets daily at 12:00AM CST).  You can increase your daily quota through your account manager.
Rate Limit: API access per instance limited to 100 calls per 20 seconds.
Concurrency Limit:  Maximum of 10 concurrent API calls.
For the Daily limit:
Option 1: Call your account manager. This will cost you $'s. For a client I work for we have negotiated a much higher limit.
Option 2: Store and Batch your records. For example, you can send a batch of 300 leads in a single lead insert/update call. Which means you can insert/update 3,000,000 leads per day.
For the Rate limit:
Option 1 will probably not work. Your account manager will be reluctant to change this unless you are a very large company.
Option 2: You need to add some governance to your code. There are several ways to do this, including queues, timers with a counter, etc. If you make multi-threaded calls, you will need to take into account concurrency etc.
Concurrent call limit:
You have to limit your concurrent threads to 10.
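The batching idea from Option 2 above can be sketched as follows; `push_leads` is a hypothetical stand-in for whatever function performs the actual Marketo call, and 300 is the per-call lead limit quoted in the answer:

```python
def chunked(items, size=300):
    """Split a flat list of leads into Marketo-sized batches (max 300
    records per lead insert/update call, per the limits quoted above)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def sync_leads(leads, push_leads, batch_size=300):
    """Send `leads` in batches via `push_leads` (a stand-in for the real
    API call). Returns how many API calls the batching saved versus
    one call per lead."""
    batches = list(chunked(leads, batch_size))
    for batch in batches:
        push_leads(batch)
    return len(leads) - len(batches)
```

At 300 leads per call, 700 leads cost 3 calls instead of 700, which is how the quoted answer gets to 3,000,000 leads per day within a 10,000-call quota.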
There are multiple ways to handle API quota limits.
If you want to avoid hitting the API limit altogether, try to achieve your functionality through Marketo webhooks. Webhooks do not count against API limits, but they have their own cons; please research this.
You may use the REST API, but design your strategy to batch the maximum number of records into a single payload instead of smaller chunks: e.g., instead of sending 10 different API calls with 20 records each, accumulate the maximum allowed payload and call the Marketo API once.
The access token is valid for 1 hour after authenticating.
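Given that one-hour lifetime, it's worth caching the token and only re-authenticating shortly before it expires, rather than spending a quota-counted request on every call. A sketch (`fetch_token` is a hypothetical stand-in for the real auth request; the one-hour lifetime comes from the answer above, and the safety margin is an arbitrary choice):

```python
import time

class TokenCache:
    """Reuse an access token for its ~1 hour lifetime instead of
    re-authenticating on every request."""

    def __init__(self, fetch_token, lifetime=3600, margin=60, clock=time.time):
        self.fetch_token = fetch_token  # stand-in for the real auth call
        self.lifetime = lifetime        # seconds the token stays valid
        self.margin = margin            # refresh a bit early to be safe
        self.clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        now = self.clock()
        if self._token is None or now >= self._expires_at - self.margin:
            self._token = self.fetch_token()
            self._expires_at = now + self.lifetime
        return self._token
```

Every API call then asks the cache for a token; over a full day this turns thousands of auth requests into roughly 24.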
Marketo's Bulk API can be helpful with regard to rate limiting: once you have the raw activities, the updates etc. on the lead object can be done without pinging Marketo for each lead: http://developers.marketo.com/rest-api/bulk-extract/. However, be aware of export limits that you may run into when bulk exporting leads + activities. Currently, Marketo only counts the size of an export against the limit once the job has completed, which means that as a workaround you can launch a maximum of 2 concurrent export jobs (which together sum to more than the limit) at the same time. Marketo will not kill a running job if a limit has been reached, so long as the job was launched before the limit was reached.
Marketo has recently increased the default limits:
Daily Quota: Subscriptions are allocated 50,000 API calls per day (which resets daily at 12:00AM CST). You can increase your daily quota through your account manager.
Rate Limit: API access per instance limited to 100 calls per 20 seconds.
Concurrency Limit: Maximum of 10 concurrent API calls.
https://developers.marketo.com/rest-api/

Unclear Google Analytics API quota restrictions

I've recently been fixing my application, which apparently reached some GA quota limitations, and I've found a couple of things that were not clear to me:
1. Does the 4 concurrent requests limitation apply per application, per web property, or something else?
2. If we break the 10 requests in any given 1-second period or the 4 concurrent requests limitation, how long does it take before GA stops responding with the 503 ServiceUnavailable error?
3. Does quota per application refer to the application name string only? We are running two different web applications using different GA application strings. Both apps connect to the GA API from the same IP address. Can we expect the quota per application to be calculated for each application string separately in this case?
4. Are the status codes sent with the 503 ServiceUnavailable response documented anywhere? Can we be sure that rateLimitExceeded refers to the 10 requests per second limitation? How can I find out the cause of a 503 response?
By the way, is it possible that stronger quota restrictions than documented may take effect sometimes? For example, is it possible that GA replies with a 503 ServiceUnavailable response after just 6 fast but consecutive requests, or because of some other undesired behavior of a client application that's not covered in the documentation?
Regards,
Pavel
It's just been answered by Nick in the GA Google Group.
Main points: the 10 QPS and 4 parallel requests limitations are counted per IP, so even an application running on a different machine in the same network may be counted.
I've submitted a documentation bugreport to the GData issue tracker.
