Yesterday (June 22nd, 2020) at 2pm Pacific, we started getting back 429 errors for all our requests to the Google Calendar V3 API. The response body is HTML asking the user to fill out a captcha.
These errors are not the standard 403 errors you get when you hit Google Calendar quota limits. Also, we checked, and we haven't hit Google Calendar API limits. So I assume this rate limiting is happening at a different level.
We fixed the issue yesterday by changing the IP address that we are sending these requests from. Unfortunately, again at 2pm today (June 23rd, 2020) the problem started happening again.
We are not getting these errors back from the Google Address Book API; it seems to affect only the Google Calendar API.
Has anyone else noticed 429 errors from Google Calendar API over the last few days? Or is Google listening and might be able to help?
Thanks!
There seems to be a new issue, filed on June 19th, on Google's Public Issue Tracker.
Several users seem to be affected, and the issue is reported as currently under investigation.
I recommend "starring" the issue in order to stay up to date on its status.
In the meantime, since the 429 error seems to be related to rateLimitExceeded, you can try the same workaround as for 403 errors, e.g. implementing exponential backoff as described in the documentation.
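A minimal sketch of that backoff loop in Python (the RateLimitError class and the request callable are stand-ins for however your client surfaces a 403/429 response, not part of any Google library):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a 403/429 rate-limit response from the API."""

def with_backoff(request, max_retries=5, base_delay=0.5, sleep=time.sleep):
    """Call `request`, retrying rate-limited calls with exponential backoff.

    Waits base_delay * 2**attempt seconds plus random jitter between tries,
    and re-raises once max_retries attempts have all failed.
    """
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * 2 ** attempt + random.random())
```

The jitter spreads retries from concurrent clients apart so they don't all hammer the API at the same instant; the `sleep` parameter is just there to make the loop easy to test without real delays.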
I had the same issue with the Google Calendar API today: HTTP code 429 and an HTML page with a captcha in the response body.
This solution helped me:
If you use Python, you need to replace
build('calendar', 'v3', http=creds.authorize(Http()))
with:
DISCOVERY_DOC = json.load(open(os.path.join(SCRIPT_DIR, 'calendar-api.json')))
googleapiclient.discovery.build_from_document(DISCOVERY_DOC, http=creds.authorize(Http()))
You can download calendar-api.json from this link.
We are building a tool to off-board existing employees, including clearing their calendars of all existing events. When querying the /calendars/{calendarID}/events/ endpoint, we occasionally get a 500 - Stack limit exceeded error. We're only generating a few dozen to a few hundred requests, so we don't seem to be hitting any rate limits, which appear to be 10k per day; additionally, the error is only intermittent, rather than failing continuously as a rate limit generally would. Is anyone familiar with this error?
You can find all the Calendar API-related errors by checking this link.
As for the error message 500 - Stack limit exceeded you are receiving, it looks like the issue might in fact be coming from somewhere else.
You can also test the Calendar API by using the Calendar API Reference here.
Reference
Calendar API Errors;
Calendar API Events:get.
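The off-boarding flow described above (list a calendar's events, then delete each one) can be structured as a pagination loop, which also makes it easy to retry just the failing page when one of these intermittent 500s appears. A sketch, where fetch_page and delete_event are hypothetical placeholders for the real events.list and events.delete calls:

```python
def clear_calendar(fetch_page, delete_event):
    """Delete every event on a calendar, following nextPageToken pagination.

    fetch_page(token) should return a dict shaped like an events.list
    response ({"items": [...], "nextPageToken": ...}); delete_event(event_id)
    should issue the corresponding events.delete call.
    Returns the number of events deleted.
    """
    token = None
    deleted = 0
    while True:
        page = fetch_page(token)
        for event in page.get("items", []):
            delete_event(event["id"])
            deleted += 1
        token = page.get("nextPageToken")
        if not token:
            return deleted
```

Because each iteration only depends on the current page token, an intermittent server error can be handled by retrying that single fetch_page or delete_event call rather than restarting the whole clear.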
I'm having an issue with an Analytics API batch request. It was working, and now it isn't, without me changing anything. I know Google is making changes to its batch endpoints, and I believe this is what is causing my errors.
https://developers.googleblog.com/2018/03/discontinuing-support-for-json-rpc-and.html
I am using the .NET client library with the AnalyticsService. Having read through the link above I'm fairly certain I've done what is needed for my batching to continue to work.
Here is a screenshot of the .NET instructions
I've upgraded all Google libraries to the latest versions. I've checked the AnalyticsService object and can confirm the BatchURI is no longer the global HTTP batch endpoint www.googleapis.com/batch; it is showing as https://www.googleapis.com/batch/analytics/v3, but I am still getting 400 Bad Request responses. Is there something else that I am missing, or do I have to wait until the 12th of August, when Google says the switch will be complete?
Thanks
Update: I created an issue on GitHub; apparently it is an internal issue, currently awaiting a fix. See here to keep updated:
GitHub Issue on .NET client library
I had an application using the Google Vision API running fine for nearly a year. It wasn't running for a few months and when I ran it recently, it always returned 400 errors. I went to the account settings and noticed my key was gone. I entered a new one and updated my program. However, it still always gets a 400 error. It does so no matter what key I use (valid or not).
It looks like it isn't accepting the key.
Any idea what the issue might be?
Any way to test that a key is valid?
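One way to sanity-check a key is to send a minimal request and inspect the JSON error body. Google APIs typically wrap errors in an {"error": {"code": ..., "message": ..., "status": ...}} envelope, and an invalid key usually produces a 400 whose message mentions the key; the exact wording below is an assumption, not a documented contract. A rough classifier sketch:

```python
import json

def classify_vision_error(response_body):
    """Roughly classify a Vision API error body.

    Assumes the usual Google error envelope:
    {"error": {"code": 400, "message": "API key not valid. ...", ...}}
    The "API key not valid" wording is the typical message, not a guarantee.
    """
    error = json.loads(response_body).get("error", {})
    message = error.get("message", "")
    if error.get("code") == 400 and "API key not valid" in message:
        return "invalid_key"
    if error.get("code") == 400:
        return "bad_request"
    return "other"
```

Since you get a 400 for both valid and invalid keys, comparing the message text of the two responses should tell you whether the key itself is being rejected or whether the request payload is the problem.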
We are collecting Google Analytics data off of the GA API for various accounts. In most instances it works without a problem, but for one account we keep getting an unknown 502 error with the following response:
<p><b>502.</b> <ins>That's an error.</ins> <p>The server encountered a temporary error and could not complete your request. <p>Please try again in 30 seconds. <ins>That's all we know.</ins>
Once we hit 10 of these (as per the error limit) we are kicked into 429s because we've reached our error quota limit.
There is no proxy (except possibly on the GA side) between us and Google, and this works without problem for every account but this one.
This sounds like a bug that should be sent to the issues tracker.
There are a few open bugs about 502 errors as well. Your issue may be the same as one of these.
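Until the bug is fixed, a client-side guard can keep the 502s from cascading into 429 quota exhaustion. A minimal sketch, where the budget of 10 mirrors the error limit described in the question, and ServerError plus the request callable are hypothetical placeholders for your GA client code:

```python
class ServerError(Exception):
    """Placeholder for an HTTP 5xx response from the GA API."""

class ErrorBudgetExceeded(Exception):
    """Raised when we stop early instead of burning through the error quota."""

def guarded_request(request, budget, max_errors=10):
    """Run `request`, recording 5xx failures in the shared `budget` list.

    Once max_errors server errors have been seen, stop issuing requests
    entirely so the account is not pushed into 429 territory.
    """
    if len(budget) >= max_errors:
        raise ErrorBudgetExceeded(f"{len(budget)} server errors seen; backing off")
    try:
        return request()
    except ServerError as exc:
        budget.append(exc)
        raise
```

The shared `budget` list lets many call sites contribute to one account-wide error count; once tripped, the job can pause and resume later rather than ride the 429 lockout.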
I've created an API with Kimono Labs to generate an RSS feed from a website. It works OK, crawling data every hour, but every several days it just stops working. No errors, nothing. In the crawl history I can see that the previous crawls were successful, and then the API just stops crawling data until I launch a manual crawl. Then the API starts working again, but only for several days, and the cycle repeats: it stops, I initiate a manual crawl, and it works for a while. What can cause such behavior?
This is intended behaviour, described under the (?) popover for every API:
<p>Auto-run frequency <span class="icon-question-circle" data-html="true" popover="Specify how often this API will automatically fetch new data from the target page(s). APIs are limited to 1 URL for a hourly auto-run, <1000 URLs for a daily auto-run, and <10,000 URLs for a weekly auto-run."></span></p>
Anyway, it was a Kimono issue that has since been fixed. I got an e-mail from support:
This is a crawling bug that we've now implemented handling for.
We are running a script that will check for queued scheduled crawls every hour
and start them if they are not running.