Getting 404 on iframe_api. Are we being rate limited or is the API actually hosed? - youtube-iframe-api

We're displaying embedded videos using the IFrame API on a page that's hit by Selenium-driven automation roughly twice every 10 minutes. Our tests have started failing because we're getting intermittent 404s from https://www.youtube.com/iframe_api.
This has been going on for two days now. Are we being rate limited here? Or does the problem lie on YouTube's end?
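One way to tell whether the failures track the automation rate is to load the script programmatically and log/retry failures. This is only a hedged diagnostic sketch in TypeScript; the attempt count and back-off are arbitrary assumptions, not anything the IFrame API documents.

// Hypothetical diagnostic: load the IFrame API script, log failures, retry a few times.
function loadIframeApi(attempt = 1, maxAttempts = 3): void {
  const script = document.createElement("script");
  script.src = "https://www.youtube.com/iframe_api";
  script.onerror = () => {
    console.warn(`iframe_api failed to load (attempt ${attempt})`);
    script.remove();
    if (attempt < maxAttempts) {
      // Arbitrary 2-second back-off before retrying.
      setTimeout(() => loadIframeApi(attempt + 1, maxAttempts), 2000);
    }
  };
  document.head.appendChild(script);
}

// The API calls this global once the script has loaded successfully.
(window as any).onYouTubeIframeAPIReady = () => console.info("iframe_api loaded");

loadIframeApi();

If the logged failure rate rises with the Selenium run frequency, rate limiting becomes more plausible; if it stays flat regardless of load, the problem is more likely on YouTube's end.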

Related

Why is my website experiencing random slow API requests?

I have a VB.NET/Vue website hosted on an internal IIS 8.5 Windows Server 2012 R2 machine. Our company has about 30 users on the site at any given time. The users experience random delays throughout the day, and on some days there are no delays (the site works great most of the time). What I'm looking for is suggestions on where to start looking to solve the issue. Here's what I've found so far.
User goes to the site and initiates an API request from the UI
User sees a loading icon for anywhere up to a minute or so while the request returns
The request eventually reaches the server after some time, executes within milliseconds, and returns the response to the user
By this time, many users have already refreshed the page, making new requests that succeed on page load. For the users who are patient and wait, the original request eventually returns its response.
So to sum everything up, there are several users experiencing delays on a daily basis.
Some days we don't have any delays, but on most days several users experience multiple delays ranging from a few seconds up to 30 seconds or a minute.
I've found all this using LogRocket and New Relic. What is happening is that all these requests complete within milliseconds once they reach the server, but the request doesn't seem to reach the server for some period of time.
I've been monitoring CPU/memory/network on these servers and it all looks fine to me when these issues occur.
It seems that the problem lies between the user's computer and whatever hardware/software exists before reaching the web server.
Update here... I found that the problem is occurring on the user's computer in all these instances. Using Google Chrome's Performance API, I was able to track timing info for these requests and found that the problem starts at fetchStart. So whatever is happening there is the cause of the issue.
Example below:
entryType: resource
startTime: 1119531.820000033
duration: 56882.43999995757
initiatorType: xmlhttprequest
nextHopProtocol: http/1.1
workerStart: 0
redirectStart: 0
redirectEnd: 0
fetchStart: 1119531.820000033
domainLookupStart: 1176401.0199999902
domainLookupEnd: 1176402.2699999623
connectStart: 1176402.2699999623
connectEnd: 1176404.8350000521
secureConnectionStart: 1176403.6700000288
requestStart: 1176404.8549999716
responseStart: 1176413.5300000198
responseEnd: 1176414.2599999905
transferSize: 15145
encodedBodySize: 14884
decodedBodySize: 14884
serverTiming: []
workerTiming: []
fetchStart is at 1119531.820000033 and requestStart is at 1176404.8549999716, a gap of roughly 57 seconds that accounts for almost the entire 56.9-second duration, so the problem is something between fetchStart and requestStart. Still looking into what is causing this.
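For anyone trying to reproduce this measurement, a rough TypeScript sketch is below. It walks the resource timing entries and flags any with a large gap between fetchStart and requestStart; the 5000 ms threshold is an arbitrary assumption for illustration.

// Sketch: flag resource entries whose requestStart lags far behind fetchStart,
// which is where the delay shows up in the entry above. Threshold is arbitrary.
const GAP_THRESHOLD_MS = 5000;
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
for (const entry of entries) {
  const gap = entry.requestStart - entry.fetchStart;
  // requestStart is 0 for cross-origin resources without Timing-Allow-Origin.
  if (entry.requestStart > 0 && gap > GAP_THRESHOLD_MS) {
    console.warn(
      `${entry.name}: ${Math.round(gap)} ms between fetchStart and requestStart ` +
      `(total duration ${Math.round(entry.duration)} ms)`
    );
  }
}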
In 2022, we are experiencing something very similar with a small fraction of our customers. There is a significant gap between the Timing API's requestStart and the startTime. This gap can be up to 8 minutes (I admire the patience of customers waiting that long), and the wait periods are also close to multiples of a minute.
In our case, it appears that there is a (transparent?) proxy between those browsers and our server infrastructure which appears to be triggering the problem. In particular, it forces a downgrade of HTTP/2 to HTTP/1.1. Whitelisting our website in that proxy does solve the problem. This isn't a very satisfactory solution, but it does make the customer happier!
[UPDATE]
In our case, it turned out that we were sending a Content-Length header with a non-zero value on a 304 response. This is technically invalid, and it caused problems with the proxy. It happened because of Django's CommonMiddleware, which always puts a Content-Length header on responses. The solution was to add a new piece of middleware that strips out the Content-Length (and content) on a 304 response.
It turned out that the content was already being stripped by our nginx frontend, but it is better not to generate it in the first place.
And what was the content? In our case, it was the four characters 'null'!
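The fix described above is Django-specific. Purely as an illustration of the same idea on a different stack, here is a hedged Express/TypeScript sketch (the middleware name and wiring are assumptions): a 304 must carry no body and no Content-Length, so intercept the end of the response and strip both.

import express, { NextFunction, Request, Response } from "express";

// Illustration of the idea above, not the Django middleware itself:
// strip Content-Length and any body when the status code is 304.
function stripBodyOn304(_req: Request, res: Response, next: NextFunction): void {
  const originalEnd = res.end.bind(res);
  res.end = function (...args: any[]) {
    if (res.statusCode === 304 && !res.headersSent) {
      res.removeHeader("Content-Length");
      return originalEnd(); // drop any body (e.g. the stray 'null' mentioned above)
    }
    return (originalEnd as any)(...args);
  } as any;
  next();
}

const app = express();
app.use(stripBodyOn304); // register before the route handlers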

Failed Google Page Speed Test with Lighthouse returned an error: FAILED_DOCUMENT_REQUEST

When I check the site speed of https://www.readonlinenewspaper.com using PageSpeed Insights, I am not able to see any results and get an error message like the one below:
Lighthouse returned an error: FAILED_DOCUMENT_REQUEST. Lighthouse was unable to reliably load the page you requested. Make sure you are testing the correct URL and that the server is properly responding to all requests. (Details: net::ERR_CONNECTION_FAILED)
It is probably caused by one of two things
1. The site just takes too long to load.
Your page takes well over 40 seconds to load (on a high-speed desktop connection, albeit in the UK; I am guessing the server is hosted somewhere else given the long delay on requests), so PageSpeed Insights thinks it is broken, as the page never completes loading within its timeout period.
Your country flags are the main cause of this; you should instead consider a CSS image sprite or inline SVGs, as the total of 438 requests on your page is so high you will never get good performance (generally only 8 requests can be made at once, so that means over 50 round trips to your server for resources).
If each set of eight resources takes 200 ms to complete, that is 10 seconds of latency (dead time waiting for a response) on its own; for me they were taking 800 to 1000 ms each!
This is particularly slow, so perhaps there is something wrong with your hosting configuration or website setup? (You aren't storing the flag URLs in the database and looking them up one at a time in a loop by any chance, are you?)
2. Hotjar
For some reason PageSpeed Insights doesn't seem to play well with Hotjar.
It is something to do with WebSockets, but I never got to the bottom of it. I just know that this is a problem I often see when people use Hotjar, and it is related to WebSockets (maybe something to do with the wss:// protocol or their implementation).
Try disabling Hotjar and running the test to see if it works then (perhaps test on another page when investigating this, as it is only the homepage that is unbearably slow to load, because of the flags as per point one).
P.S. The resource online-newspapers-banner-02.jpg is not being loaded over HTTPS, so fix that. It has nothing to do with your question; I just noticed the site was showing as "not secure" and I think that is the cause.

Why did my score drop from 80 to 20 overnight? (I didn't do anything)

Google Pagespeed: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fsuper-zava.co.il%2F
My URL: https://super-zava.co.il/
I talked to my host support (BlueHost) and they told me that the problem is not related to their server. I didn't touch anything.
It seems like it is related to your ISP.
The problem seems to be related to the time Google's services are taking to fetch the page, so it's either Google's fault or your ISP's, as mentioned above. Google says the first request took ~5 seconds for them (with the first byte taking around ~0.35 seconds to be received).
The page loads just fine here; perhaps they have changed Google's IP priority.
I reckon it's not because of you or your site.
From here, the latency to your server is around 158 ms, and the page takes around 1 second to load.
What you can do is put your site behind a WAF like Cloudflare.
As reported last week on the pagespeed-insights-discuss Google Group:
https://groups.google.com/forum/#!topic/pagespeed-insights-discuss/luQUtDOnoik
it seems that this is a problem (or an API change) in the PageSpeed Insights site itself.

Kimono Labs API suddenly stops working

I've created an API with Kimono Labs to generate an RSS feed from a website. It works fine, crawling data every hour, but every few days it simply stops working. No errors, nothing. In the crawl history I can see that the previous crawls were successful, and then the API just stops crawling data until I launch a manual crawl. Then the API starts working again, but only for a few days, and then it all repeats: it stops, I initiate a manual crawl, and it works for a while. What can cause such behavior?
It's intended behaviour, described under every API's (?) popover:
<p>Auto-run frequency <span class="icon-question-circle" data-html="true" popover="Specify how often this API will automatically fetch new data from the target page(s). APIs are limited to 1 URL for a hourly auto-run, <1000 URLs for a daily auto-run, and <10,000 URLs for a weekly auto-run."></span></p>
Anyway, it was a Kimono issue that is now fixed. I got an e-mail from support:
This is a crawling bug that we've now implemented handling for.
We are running a script that will check for queued scheduled crawls every hour
and start them if they are not running.

Google Analytics Page Load Times vs Pingdom Page Load Times.

I am examining the page load time numbers in GA and Pingdom. My average via Pingdom is consistently around 3 seconds. My page load time in GA is consistently around 10 seconds. Can anyone explain the technical reason for this difference?
Any reference to this information would be helpful; I haven't been able to find a straight answer.
This is an old question and you've probably moved on, but Pingdom only tests the response time from the server (how long it takes the server to return a 200, 4XX, or 5XX status), not even the time to receive the HTML document, while Google Analytics shows the load time of the entire page, including all content and every asset loaded.
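If you want to see both numbers for your own page, a rough TypeScript sketch using the browser's Navigation Timing API is below; mapping these metrics exactly onto what Pingdom and GA report is an assumption, they are only approximations.

// Compare a server-response-style number (time to first byte, roughly what an
// uptime check measures) with the full page load (roughly what GA reports).
window.addEventListener("load", () => {
  // Wait one tick so loadEventEnd is populated.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    if (!nav) return;
    const ttfb = nav.responseStart - nav.requestStart; // server response time
    const fullLoad = nav.loadEventEnd - nav.startTime; // complete page load
    console.log(`Time to first byte: ${Math.round(ttfb)} ms`);
    console.log(`Full page load: ${Math.round(fullLoad)} ms`);
  }, 0);
});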
