I'm using Azure Text Analytics for Sentiment Analysis. I was wondering if the API is limited to 100 requests per minute or if there is any request limit for the service.
When I tried to make more than 100 requests within a minute, the API returned an empty document.
As mentioned at https://learn.microsoft.com/en-us/azure/cognitive-services/Text-Analytics/overview:
Limits
Maximum size of a single document: 5,000 characters as measured by String.Length.
Maximum size of entire request: 1 MB
Maximum number of documents in a request: 1,000 documents
The rate limit is 100 calls per minute. Note that you can submit a large quantity of documents in a single call (up to 1000 documents).
This poster had a similar issue and developed their own rate limiter for Cognitive Services.
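If you need to work within that limit in code, the simplest approach is to batch many documents into each call and space the calls out. Below is a minimal sketch in R using httr and jsonlite; the region in the endpoint URL and the subscription key are placeholders, and it assumes the v2.0 sentiment REST endpoint described in the docs above.

library(httr)
library(jsonlite)

endpoint <- "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"  # region is a placeholder
key <- "YOUR_SUBSCRIPTION_KEY"                                                          # placeholder
texts <- c("I love this product.", "This is terrible.")                                 # your documents

batches <- split(texts, ceiling(seq_along(texts) / 1000))  # at most 1,000 documents per call
for (batch in batches) {
  body <- list(documents = lapply(seq_along(batch), function(i)
    list(id = as.character(i), language = "en", text = batch[[i]])))
  resp <- POST(endpoint,
               add_headers(`Ocp-Apim-Subscription-Key` = key),
               body = toJSON(body, auto_unbox = TRUE),
               content_type_json())
  print(content(resp))
  Sys.sleep(0.6)  # 60s / 100 calls = 0.6s between calls keeps you under the per-minute limit
}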
I'm testing out the HERE API for geocoding purposes. Currently in the evaluation period, some of my tests include geocoding as many as 400 addresses at a time (later I may rarely hit 1000). When I tried this with Google Maps, they would give me an error indicating I'd gone over the rate limit, but I have not gotten such an error from the HERE API despite not limiting the rate of my requests (beyond waiting for one to finish before sending the next).
But in the Developer FAQ the Requests Per Second limit is given as:
Plan       Public Plans   Business Plans
Basic      1              N/A
Starter    1              1
Standard   2              2
Pro        3              3
Which seems ridiculously slow. 1 request per second? 3 per second on the highest plan? Is this chart a typo? If so, what are the actual limits? If not, what kind of error should I expect if I exceed that limit?
Their documentation states that the RPS means "for each Application the number of Requests per second to HERE Services calculated as an average (number of Requests during a period of 5 minutes) to all of the APIs used to access the features listed for each subscription plan".*
They say later in the documentation that quota is calculated monthly: "When a usage record is loaded into our billing system that results in a plan crossing its monthly quota, the price applied to that usage record is pro-rated to account for the portion that is included in your monthly quota for free and the portion that is billable. Subsequent usage records above your monthly quota will show at the per transaction prices listed on this website."*
Overages are billed at 200/$1 USD for Business or 2000/$1 USD for Public plans. So for the Pro plan, you will hit your limit if you use more than 7.779 million API requests in any given month; any usage beyond that would be billed at the rates above.
Excerpts taken from Developer FAQ linked above.
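For a sanity check on that figure, the monthly quota follows from the averaged per-second rate; a rough back-of-the-envelope calculation (assuming a 30-day month) looks like this:

rps <- 3                                   # Pro plan requests per second
monthly_quota <- rps * 60 * 60 * 24 * 30
monthly_quota                              # roughly 7.8 million requests before overage billing applies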
I am trying to find the top followers based on the number of tweets about the brand Maybelline on Twitter. The brand has about 600K followers, and when I try to retrieve them the code keeps running for hours. Is there an efficient way to do this? I am using the code below after setting up Twitter authentication. I want the top 50 followers who tweeted the most about Maybelline.
library(twitteR)

user <- getUser('Maybelline')      # look up the brand's account
user$toDataFrame()                 # inspect the user object as a data frame
followers <- user$getFollowers()   # retrieve all followers (very slow for ~600K accounts)
Thanks
While working with the Twitter API, it's useful to familiarize yourself with their limits. You have two main limits for a GET request: one is a rate limit (how many requests you can make in a 15-minute window), and the other is a cap on how many results a given call returns.
In your scenario, you are using the GET followers/list endpoint from their API. You can read the docs for that here. That endpoint returns a list of followers and is limited to 20 followers per request and 15 requests per 15 minutes, meaning that in a 15-minute window you can only retrieve 15 * 20 = 300 users. Retrieving 600K followers that way would take a very long time (30K minutes = 500 hours = ~21 days).
It would be more efficient to use GET followers/ids, which returns up to 5K user IDs per request with the same 15 requests per 15 minutes rate limit (Twitter API reference here). You can use this in conjunction with GET users/lookup, which returns up to 100 users per request and has a rate limit of 900 requests per 15 minutes. This means it would take 2 hours (at 75K users per 15-minute window) to get 600K follower IDs, and less than 2 hours to get the user objects (at 90K users per 15 minutes).
The rate limits can change depending on how the package you are using does authentication. If you are logging in as a Twitter user with your credentials, then the above rate limits are correct. If you are using only application credentials, then getting the followers will take 3x longer as users/lookup has a rate limit of 300 requests in that case or 30K users per 15 minutes. This answer has some good information on rate limits.
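As a rough sketch of that approach in R with twitteR (assuming you have already authenticated, e.g. via setup_twitter_oauth(); the n value below is just the approximate follower count):

library(twitteR)

user <- getUser('Maybelline')
follower_ids <- user$getFollowerIDs(n = 600000)  # followers/ids: 5K IDs per request, ~75K per 15 minutes
followers <- lookupUsers(follower_ids)           # users/lookup: up to 100 users per request

Both steps will still run into the 15-minute windows described above, so expect the calls to pause and retry along the way.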
When using https://www.linkedin.com/countserv/count/share?format=json&url= to access an article's share count, is there a daily API limit?
We noticed that retrieving count data was taking as much as 20 seconds on our production server. We added logic to cache the counts, and the 20-second delay stopped the next day. We are left wondering, though, what the limit might be (we can't seem to find it in your documentation).
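A minimal sketch of that kind of caching in R with httr (the one-hour cache window, the in-memory store, and the "count" field name are assumptions, not anything LinkedIn prescribes):

library(httr)

share_count_cache <- new.env()

get_share_count <- function(article_url, max_age_secs = 3600) {
  if (exists(article_url, envir = share_count_cache, inherits = FALSE)) {
    hit <- get(article_url, envir = share_count_cache)
    if (difftime(Sys.time(), hit$fetched_at, units = "secs") < max_age_secs) {
      return(hit$count)  # serve from cache instead of calling the API again
    }
  }
  resp <- GET("https://www.linkedin.com/countserv/count/share",
              query = list(format = "json", url = article_url))
  count <- content(resp, as = "parsed")$count  # assumes the JSON reply exposes a "count" field
  assign(article_url, list(count = count, fetched_at = Sys.time()),
         envir = share_count_cache)
  count
}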
I'm scraping some tweets using the twitteR package. It all works fine, but when I want to scrape a significant number of tweets I get the following message:
[1] "Rate limited .... blocking for a minute and retrying up to 119 times ..."
From reading https://dev.twitter.com/streaming/overview/request-parameters I understand there's a maximum number of requests that can be made. What I do not understand, however, is why I sometimes hit the wall after crawling only 20 tweets while other times I can get up to 260 before being limited.
Any thoughts on how many tweets you can gather per time span?
Rate limits, and the way they function, differ from API call to API call. What call are you making specifically? If you are just interested in gathering tweets related to a subject, I'd suggest using the streaming API (streamR), as it requires only one API call and allows you to stream for an indefinite amount of time.
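A minimal streamR sketch (assuming you already have an OAuth token saved in my_oauth; the keyword, file name, and timeout are placeholders):

library(streamR)

filterStream(file.name = "tweets.json",  # raw tweets are appended to this file
             track = "your_keyword",     # keyword(s) to follow
             timeout = 600,              # collect for 10 minutes; 0 streams indefinitely
             oauth = my_oauth)

tweets <- parseTweets("tweets.json")     # parse the collected JSON into a data frame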
As I am not coming close to 100,000 queries per day, I am assuming that Google is referring to the Freebase limit of 10 requests per second per user. (I am passing in my Google API key.)
If I am running a query that crosses multiple Freebase domains, is that considered more than one request? Or is a single query considered one request regardless of its size?
Thanks,
Scott
Yes, it sounds like you're exceeding the per-second rate limit. You'll need to introduce some delays in your application so that you don't exceed the limit. The rate limit only applies to HTTP requests, so you can query as much data as you like as long as it fits in one request.
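A minimal throttling sketch in R with httr (the mqlread endpoint URL, query format, and key parameter below are from memory and only meant as placeholders):

library(httr)

run_queries <- function(queries, api_key, max_per_second = 10) {
  results <- vector("list", length(queries))
  for (i in seq_along(queries)) {
    results[[i]] <- GET("https://www.googleapis.com/freebase/v1/mqlread",
                        query = list(query = queries[[i]], key = api_key))
    Sys.sleep(1 / max_per_second)  # space calls so the per-second limit is never exceeded
  }
  results
}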