Next.js ISR creates response times of up to 60 seconds

We have reached a limit when it comes to build times in our CI/CD. We pre-render 9k pages, and we have switched to ISR so that we can update and generate the less relevant pages on the fly.
During a small load test we saw a huge drop in the overall performance of the whole Next.js app. The weird part: none of the servers ever died. The load test also ended in timeouts, which the web server somehow queued. We saw build times of up to 60 seconds on some pages; only after that were they delivered and available in the cache.
Has anyone faced a similar issue? Also, does anyone have an idea how we can increase the horsepower, given that we use Bitbucket as our CI/CD, to decrease build time?
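For context, an ISR setup like the one described usually hinges on `getStaticPaths`'s `fallback` mode and the `revalidate` interval. A minimal sketch of that shape (the slug and the data-fetching helper are stand-ins, not the asker's actual code):

```javascript
// pages/[slug].js — minimal ISR sketch. fetchPage is a stub standing in
// for the real CMS/API call.
async function fetchPage(slug) {
  return { slug, title: `Page ${slug}` };
}

// Exported as `getStaticPaths` in a real Next.js page.
async function getStaticPaths() {
  return {
    // Pre-render only the most relevant pages at build time;
    // the remaining ~9k pages fall through to `fallback`.
    paths: [{ params: { slug: 'home' } }],
    fallback: 'blocking', // build uncached pages on first request, then cache
  };
}

// Exported as `getStaticProps` in a real Next.js page.
async function getStaticProps({ params }) {
  const page = await fetchPage(params.slug);
  return {
    props: { page },
    revalidate: 60, // serve cached HTML, regenerate in the background
  };
}
```

With `fallback: 'blocking'`, the first request to a not-yet-built page waits for a full server-side render, which is one place 60-second responses can come from if the data source is slow under load.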

Related

Failed Google Page Speed Test with Lighthouse returned an error: FAILED_DOCUMENT_REQUEST

When I check my site (https://www.readonlinenewspaper.com) speed using PageSpeed Insights, I am not able to see any results and instead get an error message like the one below:
Lighthouse returned an error: FAILED_DOCUMENT_REQUEST. Lighthouse was unable to reliably load the page you requested. Make sure you are testing the correct URL and that the server is properly responding to all requests. (Details: net::ERR_CONNECTION_FAILED)
It is probably caused by one of two things
1. The site just takes too long to load.
Your page takes well over 40 seconds to load (on a high-speed desktop connection, albeit in the UK; I am guessing the server is somewhere else due to the long delay on requests), so PageSpeed Insights thinks it is broken, as the page never completes loading within its timeout period.
Your country flags are the main cause of this. You should instead consider a CSS image sprite or inline SVGs, as the total of 438 requests on your page is so high that you will never get good performance (generally only 8 requests can be made at once, so that means over 50 round trips to your server for resources).
If each set of eight resources takes 200 ms to complete, that is 10 seconds of latency (dead time waiting for a response) on its own; for me they were taking 800 to 1000 ms each!
This is particularly slow, so perhaps there is something wrong with your hosting configuration or website setup? (You aren't storing the flag URLs in the database and looking them up one at a time in a loop, by any chance?)
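The round-trip arithmetic above can be checked quickly (the 8-connection limit and 200 ms per trip are the answer's own assumptions):

```javascript
// Rough latency model: browsers typically open ~8 parallel connections
// per host, so 438 requests serialize into ~55 round trips.
const requests = 438;
const parallelConnections = 8;
const roundTrips = Math.ceil(requests / parallelConnections); // 55

const latencyPerTripMs = 200; // the answer's optimistic per-trip figure
const totalLatencyMs = roundTrips * latencyPerTripMs; // 11000 ms ≈ 11 s

console.log(roundTrips, totalLatencyMs);
```

At the 800 to 1000 ms per trip actually observed, the same math gives 44 to 55 seconds of dead time, which matches the 40-plus-second loads described.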
2. Hotjar
For some reason PageSpeed Insights doesn't seem to play well with Hotjar.
It is something to do with WebSockets, but I never got to the bottom of it; I just know that this is a problem I see often when people use Hotjar (maybe something to do with the wss:// protocol or their implementation).
Try disabling Hotjar, run the test again, and see if it works then (perhaps test on another page when investigating this, as it is only the homepage that is unbearably slow to load, because of the flags, as per point one).
P.S. the resource online-newspapers-banner-02.jpg is not being loaded over HTTPS, so fix that. It has nothing to do with your question; I just noticed the site was showing as "not secure" and I think that is the cause.

502 Bad Gateway: GET / task timed out after 10.01 seconds

I am using Next.js and Express as my front-end and back-end servers. The Next.js app is hosted on Zeit Now, and the Express app is hosted on Heroku.
If I go to the Express app directly, I can confirm that it's working correctly and that its connection to MongoDB works fine as well.
When I hit the index page of the Next.js app through Zeit, it seems to hang on the GET / task for more than 10 seconds.
I am only calling 3 endpoints, just GET methods, from index.js of the Next.js app. This shouldn't hang the whole application.
If I go to my server independently, it only takes 3 seconds or so to give back the JSON data.
I also looked at the Functions tab Zeit provides, but it won't show exactly which serverless function was failing, so it is hard for me to debug this. I also whitelisted all IPs in Mongo, so the database should be fine.
If anyone has dealt with this before, please let me know.
My site is https://www.yaobaiyang.com
The issue happens unexpectedly; you may or may not see this error.
I had the same problem on my website:
Check the limits on your plan at https://vercel.com/docs/v2/platform/limits (especially if you're on the free plan, which has some limits).
In my case, the problem was an uncaught error: the lambda crashed and then simply waited out the timeout.
On further investigation, the problem may also come from Heroku. Heroku's free plan gives 1000 dyno hours, which is the overall usage time; on top of that, if there is no access within a 30-minute window, the server goes to sleep. There can be a delay in reawakening, which usually takes longer than a response when the dyno is active. If this is the problem, the solutions are:
Use something like Pingdom or a cron job to automatically request my Heroku app at regular intervals of less than 30 minutes to keep it awake.
Use a VPS like DigitalOcean or Vultr that runs 24 hours a day, 365 days a year, and deploy my Node, Nginx, HTTP/2, etc. there.
Upgrade Heroku's plan to remove the 30-minute sleep.
Upgrade the Zeit plan to get a longer timeout.
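The keep-alive option above can be as small as a timer that pings the app more often than Heroku's 30-minute idle threshold. A sketch under those assumptions (the URL is illustrative, and the fetch function is injected so the timer logic is testable without a network):

```javascript
const SLEEP_THRESHOLD_MS = 30 * 60 * 1000; // Heroku free-dyno idle limit
const PING_INTERVAL_MS = 25 * 60 * 1000;   // ping with a safety margin

// Decide whether the app has been idle long enough to need a ping.
function shouldPing(lastActivityMs, nowMs, intervalMs = PING_INTERVAL_MS) {
  return nowMs - lastActivityMs >= intervalMs;
}

// Fire a request on a fixed timer; pass in any fetch-like function.
function scheduleKeepAlive(url, fetchFn) {
  return setInterval(() => {
    fetchFn(url).catch((err) => console.error('keep-alive ping failed:', err));
  }, PING_INTERVAL_MS);
}
```

Calling `scheduleKeepAlive('https://example-app.herokuapp.com/', fetch)` from any always-on machine would do it; a hosted cron service or a Pingdom check achieves the same thing without running code yourself.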

Where can I find a list of max and min execution times for WordPress plugins and APIs?

I have searched the web looking for a list of the execution times of WordPress plugins, APIs, and themes.
I have a scenario in which my client is using WP Engine as their host, and they don't want to exceed the maximum execution time offered by WP Engine, which is 60 seconds. I'm using the Avada theme, which recommends an execution time of 300 seconds.
Now, I'm not familiar with many WP APIs, themes, and plugins; therefore, I was looking for a list that displays recommended execution times, or for someone to share their experience with their execution times.
It is very unlikely that you are going to be able to find this information, and if you do, it is likely to be very inaccurate given that execution time is dependent on many different factors that can't be controlled for across users / hosts / environments. However, 60 seconds is a generous time limit - the default for PHP is 30 seconds. Note that this limit is for how long the server takes to generate the response, not how long the page takes to load. For example, a website might take 1 second to execute PHP in response to a website GET request, but the browser might take 120 seconds to fully load the page that the server responds with. If your server takes longer than 60 seconds to process your PHP in response to just a normal web request, you very likely have other very serious issues.
It seems like the main reason why the Avada theme is asking for 300 seconds is for the ability to bulk load the demo content, which is probably doing something like downloading a ZIP file, unzipping it, and processing the content. This should only have to be done once, and you should be able to get around it by manually importing via FTP.
Typically this limit is only an issue for things like that: importing or exporting lots of large files, loading lots of content on one page (for example, if you set posts per page in WordPress to 1000), etc. Or bad code: a for() loop that never exits would have an execution time of "infinity".
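For reference, the PHP limit discussed above is the `max_execution_time` setting; where a host allows overrides, it looks like this in `php.ini` (a hedged sketch — on WP Engine specifically this value is managed by the host, so this is illustrative rather than something to apply there):

```ini
; php.ini — raise the script execution limit to the host's 60 s ceiling
max_execution_time = 60
```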

PCF: automatically stop and start applications during non-business hours

I am new to PCF and working on a task to stop all instances at a particular time.
I need to stop all app instances during non-business hours and bring them back before the start of business hours.
This can be achieved by using App Autoscaler, which can automatically increase and decrease instances based on any of the following:
Number of HTTP requests (HTTP throughput).
CPU utilization.
HTTP latency.
This will automatically drop the instances during off hours and increase them when the number of requests starts to rise, i.e. during business hours.
You could also test this by doing stress testing with a tool like JMeter, which helps you increase and decrease the load on your app artificially.

Meteor incrementally loads data on refresh, but not on route

I have a Meteor application, and among other things, it does a count of a collection on the client side. When I refresh the page, I see the count incrementally increase - it slowly ticks up from 0 to 3000. I assume this is because the data is published incrementally.
However, when I route to the same page from another, it waits for the data, which takes a few seconds, and then the number 3000 appears. My guess is that it has to do with several large queries being waited on before the page renders, when the data is already completely published. The subscription manager package does not solve the problem by caching the data, which supports my theory, but my tests only help so much.
How can I achieve the same incremental loading on routing?
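One pragmatic workaround, since the data is already fully published when routing client-side, is to animate the displayed count up to the real value rather than relying on DDP delivering documents incrementally. A framework-agnostic sketch (the step count and the hook-up to the router or a ReactiveVar are assumptions, not Meteor APIs):

```javascript
// Produce the intermediate values to display while "counting up" from the
// previously shown number to the real collection count.
function incrementalSteps(from, to, stepCount) {
  const steps = [];
  for (let i = 1; i <= stepCount; i++) {
    steps.push(Math.round(from + ((to - from) * i) / stepCount));
  }
  return steps; // always ends exactly at `to`
}

// Display one step at a time; `render` would set a reactive value that the
// template shows (e.g. a ReactiveVar in Meteor).
function animateCount(from, to, render, intervalMs = 16) {
  const steps = incrementalSteps(from, to, 60);
  let i = 0;
  const timer = setInterval(() => {
    render(steps[i]);
    if (++i >= steps.length) clearInterval(timer);
  }, intervalMs);
  return timer;
}
```

Calling `animateCount(0, Collection.find().count(), render)` from the route's on-ready hook would reproduce the tick-up effect even though the data arrives all at once.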
