From: next/link
You can see that the <Link> component from next/link enables client-side transitions and link prefetching, which are great features, but maybe not for all cases.
Please see the caveat I've run into. Let's say I have the following pages:
Home - Some landing page with a nav bar
Latest - Here I can see my latest posts
Admin - Here I can add more posts
The Latest page from the example above uses getStaticProps with revalidate. Something like:
export const getStaticProps: GetStaticProps<HomeRoute> = async () => {
  const preloadedState = await getPreloadedState();

  return {
    revalidate: 1,
    props: {
      preloadedState,
    },
  };
};
In theory, after 1 second the next request should receive the last (stale) response while triggering a new static regeneration to be served to subsequent requests. After another second the process repeats, so the data is never more than about a second old, which is pretty much immediate.
Now, see the caveat I've run into with next/link:
User lands on the Home page. There is a Link on the nav bar pointing to Latest. That link will be prefetched by next/link.
In some other browser, an admin goes to the Admin page and adds one more post (which should appear on the Latest page at some point).
Now the user clicks the Latest link. The new post is not there.
They click Home again, then Latest again. The new post is still not there, and it never will be.
The transitions in this case are blazing fast, which is nice. But from my experience so far, that user is locked into a version of my website where the new post will never be available, because the first prefetch happened at a time when the new post didn't exist.
The only way that user will ever see the new post is by pressing F5 to do a full website reload. And it might be necessary to refresh twice, because the first reload might return the previous stale version while triggering the regeneration for the next one.
What is the workaround for this issue? Should I not use next/link for pages that contain any dynamic data? Should I just use normal <a> tags?
UPDATE
From: https://nextjs.org/docs/basic-features/data-fetching#statically-generates-both-html-and-json
From that documentation, we can see that indeed, client-side transitions will not trigger a page regeneration, because they won't call getStaticProps. They only fetch the pre-built JSON object for the page to use as props.
AFAIK, it means that you'll be locked to the version of the page that existed when you first visited the website. You can go back and forth and nothing in the pages would change, because the JSON data is probably cached on client anyway.
PS: I've tested this (like I've mentioned in the question above) and this is exactly what happens.
So how do I work around that? I would like users who keep a tab of my website open to get updated pages when they navigate from one page to another.
A POSSIBLE SOLUTION
Set some kind of idle-time counter, and if the user accumulates, say, 10 minutes of idle time (meaning they left the tab open), then whenever they come back and perform some action, I could refresh the whole website to make sure they get the new version of the pages.
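As a rough sketch of what I mean (plain client-side JS, run once on the client, e.g. from a useEffect in a custom _app; the 10-minute threshold and the list of activity events are arbitrary values I picked):

// Hypothetical sketch: reload the page when the user comes back after being idle.
let lastActivity = Date.now();
const IDLE_LIMIT = 10 * 60 * 1000; // 10 minutes of idle time

['click', 'keydown', 'scroll', 'touchstart'].forEach((eventName) => {
  window.addEventListener(eventName, () => {
    // If they return after the idle limit, do a full reload so the freshly
    // revalidated pages (and their JSON) are fetched again.
    if (Date.now() - lastActivity > IDLE_LIMIT) {
      window.location.reload();
    }
    lastActivity = Date.now();
  });
});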
Has anyone faced that problem before?
I've posted this very same question in several forums and this is the response I've got:
It seems what you described is true. next/link caches results on the client side, and your visitor will not fetch a revalidated result out of the box unless there is a full-page reload.
Depending on the likelihood of content changes, you might want to use <a> instead or you can look at some client-side content reload strategy that kicks in after mount and query data source for updated content.
Given that fact, I'll stick to using next/link and client-side transitions. But I'll also use something like a setInterval() to do a full website reload from time to time, so I'm sure my users will keep getting revalidated pages eventually.
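Something like this rough sketch is what I have in mind (the 30-minute interval is just an arbitrary example value):

// Hypothetical sketch: periodically force a full reload so the client picks up
// freshly revalidated pages.
setInterval(() => {
  window.location.reload();
}, 30 * 60 * 1000); // every 30 minutes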
I have a WordPress website running and I am using the W3 Total Cache plugin to make the site load faster. When I scan the site in Google PageSpeed Insights, I notice I am getting inconsistent scan results. I have a Facebook Messenger chat widget floating on the webpage and a Google map. Since these two gave me the "Reduce the impact of third-party code" warning, I made changes so that they are loaded only after the DOM has loaded completely; I used a jQuery setTimeout for this. I actually managed to remove the warning from the result by doing this. But now and then I notice the same warning coming back, even though I have made the adjustments. If I scan the site two or three times in quick succession the warning may go away, but it will be back again once I try after a while.
These are the results of frequent scans. Do you have any idea what could be going wrong here? I spent a lot of time searching but couldn't get my head around it.
With the classic HTTP/1.0 Hypertext Transfer Protocol, resources like Javascript, CSS, HTML, images etc. are loaded in request/response pairs, meaning the browser sends a request for a resource (be it CSS, Javascript, etc.) and waits for the response to come back before it requests another resource. Even though they are loaded in request/response pairs, the pairs are not always going to follow the same sequence strictly, due to randomness in network latency, server response time, the load the server is currently experiencing, etc.
With HTTP/2 and HTTP/3, the newer versions of the HTTP protocol, requests can be sent all at once instead of waiting for a response to come back before sending another request. I checked your website and saw that it is using HTTP/2 and HTTP/3. Since requests can be sent all at once, this can contribute to a degree of "inconsistency" as well, among other things. Even with HTTP/1 there's always a degree of randomness, since many factors play into it: the server response time will differ, the network latency will differ, and so on.
To illustrate this, if you are using the Chrome browser, open Developer Tools by clicking the three dots in the very top right corner of the browser, then "More Tools", then "Developer Tools". Alternatively, you can press "Ctrl+Shift+I" on Windows or "Command+Option+I" on Mac. Then go to the "Network" tab and refresh the page. Each time you refresh the page, the resources are loaded in a slightly different sequence:
In the image above, using the Google Tag Manager UA-174548329-1 Javascript as an example (I know it's probably not the Google Map script), it is loaded as the 4th resource.
When I refresh the page again, your Google Tag Manager UA-174548329-1 Javascript is loaded as the 11th resource:
When the page is being loaded, or when you run it through Google's PageSpeed Insights, the main thread is sometimes busy and sometimes not, due to this randomness in requests and responses. Your main thread is also constructing the DOM and doing a lot of work. Sometimes it gets blocked by render-blocking resources, such as Javascript.
Javascript is always going to block the critical rendering path by default. Without looking at your setTimeout code it's hard to say exactly how you are delaying your Javascript, but it's safe to assume that it probably doesn't help clear the critical rendering path. Instead of using setTimeout, you should use defer or async.
You can look more into the Critical Rendering Path here. The main thread is the main process your browser runs to do most of the work of processing and rendering the CSS, Javascript, and HTML on a page. The critical rendering path is "the sequence of steps the browser goes through to convert the HTML, CSS, and JavaScript into pixels on the screen" - quoted from Critical Rendering Path. In other words, it is the sequence in which your Javascript, HTML, CSS, images, and other resources are downloaded and rendered. It takes a lot of knowledge to optimize your critical rendering path, and it's no easy job. However, there are two attributes you can try in the script tag, namely "async" and "defer", to control when your Javascript is executed.
Take a look at this image:
Credit: Growing with the Web
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/loading-third-party-javascript/?utm_source=lighthouse&utm_medium=unknown
As you can see, you can try putting the async or defer attribute on your script tag and see if it helps.
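For example, instead of delaying the script with setTimeout, the tag itself would look something like one of these (the src below is just a placeholder for your actual third-party script):

<!-- "third-party-widget.js" is only a placeholder URL for illustration -->
<script async src="https://example.com/third-party-widget.js"></script>
<script defer src="https://example.com/third-party-widget.js"></script>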
With the 'async' attribute in the script tag, your Javascript will be executed asynchronously as soon as it's downloaded. The blue bar under <script async> in the image shows that the script is downloaded while the HTML is being parsed, since the green bar and the blue bar run in parallel. As soon as the download finishes, the script is executed; at this point the HTML parsing is paused until the script has finished executing. Without the 'async' attribute, by contrast, your HTML parsing is paused (or blocked) while the script is being downloaded and executed.
With the 'defer' attribute in the script tag, you are deferring the execution of your Javascript until the DOM has finished parsing. The script will start downloading as soon as the browser encounters it, but the downloading won't block the HTML parsing.
In summary, you can use the 'async' attribute on your third-party script to 'unblock' your main thread to a certain degree: the script will be downloaded in the background while your DOM is being rendered. This speeds up the main thread a bit. One caveat, however, is that the execution is still going to be render-blocking. Another very important thing to note is that by using the 'async' attribute you should be prepared for some possibly erratic page behaviour: more 'inconsistencies' might happen, because the Javascript can now execute at any point in the rendering path, so if something needs to happen before or after the script, you might break that flow and logic.
Or you can use the 'defer' attribute on your third-party script to tell it to execute only after the DOM has been parsed completely. This speeds things up only a little, because while the download can now happen in parallel with the HTML parsing (versus using a default script tag without defer or async), the execution still takes an overhead on the main thread.
As per Google's support document, there's a section on "How do you load third-party script efficiently?"; here are a few of its suggestions:
"
Load the script using the async or defer attribute to avoid blocking document parsing.
Consider self-hosting the script if the third-party server is slow.
Consider removing the script if it doesn't add clear value to your site.
Consider Resource Hints like <link rel=preconnect> or <link rel=dns-prefetch> to perform a DNS lookup for domains hosting third-party scripts.
"
Other methods:
Check out how to compress, minify, or combine your various Javascript files into one file (if you are using Javascript in the form of files). Use GZIP compression to compress your Javascript and CSS. Also check out how to load third-party scripts via a CDN (Content Delivery Network / Content Distribution Network), among other things.
Updated Aug 12, 2020:
In response to your comment: since your third-party scripts come from plugins and you can't add the 'async' or 'defer' attribute into the script tags yourself, you can consider adding this before your other scripts:
<script>
// If your script tag has an id, use either one below:
document.getElementById("your_script_tag_id").async = true;
document.getElementById("your_script_tag_id").defer = true;
// If your script tag has a class name, use either one below:
document.getElementsByClassName("your_script_tag_class_name")[0].async = true;
document.getElementsByClassName("your_script_tag_class_name")[0].defer = true;
// If you want to target a script tag by its position in the document (here, the first one), use either one below:
document.getElementsByTagName("script")[0].async = true;
document.getElementsByTagName("script")[0].defer = true;
</script>
You can also check out Async JavaScript, which allows you to defer or async your Javascript, including the third-party scripts.
From what I can see, you have set the "delay" to 3 seconds on the Facebook Messenger chat. However, your site takes a lot longer than that to load the initial content.
Your site will often not have loaded the "above the fold" content within 3 seconds due to things like network latency, load on your server etc.
For this reason the Facebook Messenger chat script is getting loaded at a point where the CPU may or may not be busy. For things like "Total Blocking Time" this is important, as that metric listens for when the CPU has its first quiet period to work out when the page is usable.
For working out "impact of third party code" it is looking at when the CPU is working while trying to render the "above the fold" content, hence why sometimes it shows as an impact and other times it does not as sometimes your above the fold content has loaded sufficiently before the Facebook Messenger is initialised.
Additionally, you have to consider when your main JS file containing the timeout is loaded; sometimes it will be loaded sooner depending on latency etc., so this affects the time at which the fbDiv is added as well.
There is a lot to cover, so to simplify the answer (as there is an awful lot to explain as to why this happens): increase the delay on Facebook Messenger, or only have it load on a button click.
For example, you could have a button that says "chat with us" and then use the click event to load Facebook Messenger (and hide the "chat with us" button). This would be my recommendation.
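A rough sketch of that idea (the button id and the script URL below are placeholders; use whatever markup and script your Messenger plugin actually injects):

// Hypothetical sketch: only load the Messenger widget when the visitor asks for it.
document.getElementById("chat-with-us-button").addEventListener("click", function () {
  var s = document.createElement("script");
  s.src = "https://example.com/messenger-widget.js"; // placeholder for the real plugin script
  s.async = true;
  document.body.appendChild(s);

  // Hide the "chat with us" button once the real widget starts loading.
  this.style.display = "none";
});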
Alternatively, looking at the load speed of your site, you could set the delay to about 7 seconds and it would then (probably) be consistent.
I have a task to list all pages that are open at a given moment and show how many people are on each page.
I am looking for a way to make that happen without keeping any DB records or saving information to a text file or something like that. (Not necessarily, actually: of course I am going to save that info to a DB; I just wanted the logic of catching opened page addresses.)
I can of course keep track of every page that has been opened up to that point, but I want the page address to appear in the list when someone opens that address and disappear when the user is no longer browsing it.
Can you give me some ideas how to make that happen using ASP.NET?
Note: I am using Web Forms with ASP.NET 4.5.
Thanks!
"I just wanted to the logic of catching opened page addresses"
Use Javascript in a timed loop (onload and then every 30 seconds, perhaps) on every page to asynchronously post to a page on your server. It should send information identifying the page. This will give you a good idea of how many people are currently on each page.
Store this information in a db in your code-behind, and use this information to report as you wish.
Of course, if a user leaves their browser open on one of these pages or opens another tab, it will still be reported as 'open'.
To get the current url in javascript you can use:
var pathname = window.location.pathname;
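Putting that together, a rough sketch of the loop might look like this ("/PageHeartbeat.ashx" is a placeholder; point it at whatever handler or page you add on the server to record the hit in your database):

// Hypothetical sketch: report the current page to the server on load and every 30 seconds.
function reportPage() {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/PageHeartbeat.ashx", true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("path=" + encodeURIComponent(window.location.pathname));
}

window.onload = function () {
    reportPage();                    // report once when the page opens...
    setInterval(reportPage, 30000);  // ...then every 30 seconds while it stays open
};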
In Google Analytics you can see what pages are being viewed in near-real time.
Why not use that to solve this issue? It's easy to set up.
I have to mock up a ticking dashboard as part of a proposal. I am looking for ideas other than the HTTP Refresh option that I am thinking of. The objective is to quickly mock up a look and feel and a working dashboard that ticks over. It only has to provide new content every five seconds, e.g. there are a bunch of KPIs whose outputs, which are percentages, have to be updated.
A simple bunch of HTML pages using HTTP Refresh is on my mind. Is there a better option anyone can think of? E.g. can HTML5 do this better? Is CSS an option? Thank you in advance for any ideas.
I would go for an ajax call back to the server to get the latest update and then embed that wherever it needs to go - you could set the ajax function on a timer to run every 5 seconds, or 1 second, or whatever. This way your entire page is not being refreshed, and additionally you can be calling back to the server for a new update even while the previous one is still being rendered.
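A rough sketch of that polling approach ("/api/kpis" and "kpi-panel" are placeholder names for wherever the latest values come from and the element they get written into):

// Hypothetical sketch: fetch fresh KPI values every 5 seconds and drop them into the page.
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/kpis", true);
    xhr.onload = function () {
        if (xhr.status === 200) {
            document.getElementById("kpi-panel").innerHTML = xhr.responseText;
        }
    };
    xhr.send();
}, 5000);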
The downside is that you won't have a page history (i.e. users will not be able to navigate 'back' to previous ajax updates) unless you explicitly create one; I can't see that being necessary though.
Please post a comment if you need more info about ajax.
I have created a little flight simulator based on the Milktruck demo. The problem is that the plugin crashes every couple of refreshes. I have no idea why that is. It happens in all browsers.
You can see the game here
http://www.stepupforisrael.com/plane-game/0205/Mingame.htm
Any clue will be welcome....
I see - the issue is that after a few refreshes you get the error ERR_BRIDGE_OTHER_SIDE_PROBLEM and the detail is "bad status".
I believe this is a bug with how you are loading the plug-in itself - I can reproduce it using a very basic setup. Also, it looks as if there is already a bug report for it here.
http://code.google.com/p/earth-api-samples/issues/detail?id=736
From reading around, and testing, it appears that loading the plug-in using the Google Loader resolves the issue. To do this:
Firstly, remove the onload 'init' call from the body element.
So that
<body onload='init();' onunload="//GUnload()">
becomes
<body onunload="GUnload()">
Then use the Google Loader callback to handle the initialization. To do that, place this line at the end of your script block, right before the closing element.
google.setOnLoadCallback(init);
My thinking is that the onload event occurs immediately after a page is loaded. However, sometimes the plugin has not finished authentication when this event occurs, hence the intermittent error. You can read more about this here: https://groups.google.com/forum/?fromgroups#!topic/google-earth-browser-plugin/vvXKanCJbJU
We had our own share of pain with that error, and we used the solution described in a different answer (GUnload etc.) without success. The problem was solved when we moved our code from the murky hosting we were using to Amazon EC2. The error stopped immediately. Was it caused by a timeout between our original host and Google's servers? We don't know anything except that what we did saved us...