We have a peculiar issue with SSR rendering: pages are being served from the cache layer from the second render onwards. We want to disable this for certain use cases. E.g. if we change a product name in the back office and then access the product in the storefront, the first time we get a 301, but the next time we get a 200. We want to always get a 301 when the product name gets changed/removed. We tried setting the header via this.response.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate'); but it didn't work. Can you please help us with this? #help
Disabling the cache only for certain scenarios is not supported in the Spartacus SSR optimization engine, so you would have to write custom logic to support it.
For the 5.0+ versions we can consider making the "return from cache on the second render" feature optional. Can you please create a ticket for that on GitHub?
As a side note, it seems like you're using SSR as a live server, which is not recommended. We recommend putting SSR servers/nodes behind a CDN. If the CDN caches the 301, requests hitting the CDN will always get the 301.
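Not an official recipe, but as a sketch of the header side of that setup (assuming an Express-based SSR server; the directive values are assumptions to tune for your CDN, and this only affects HTTP/CDN caching, not the engine's internal render cache):

import express from 'express';

const app = express();

// Let the edge cache hold responses (including the 301) briefly,
// while browsers always revalidate. Values are illustrative.
app.use((req, res, next) => {
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=300, max-age=0, must-revalidate'
  );
  next();
});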
We have a Next.js site that relies heavily on ISR (incremental static regeneration) and uses webhooks to revalidate specific pages on-demand when their content has changed in our CMS.
However, we also want to keep some site-wide data in our CMS. In particular, we want our CMS to be the source of truth for our navigation header and for a global message banner that can show special notifications on occasion (e.g. "We're closed today, it's a special holiday").
If either of these "site-wide" things change in the CMS, we would need to revalidate every single path. Is there a way to do this without trying to keep track of every potential path ourselves and explicitly calling revalidate on them? It seems like the framework should help us here, but I can't find any mechanism that would revalidate everything at once.
Has anyone solved this or found good patterns for leveraging ISR with site-wide data like a navigation or global notification banner?
Thanks
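(For reference, this is a minimal sketch of the webhook-driven, per-page revalidation described above, using the Pages Router's res.revalidate; the route path, secret name and payload shape are assumptions. Site-wide data is exactly the part it doesn't cover, since it still has to be called once per path.)

// pages/api/revalidate.ts
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    // the CMS webhook sends the path of the page whose content changed
    await res.revalidate(req.body.path); // e.g. '/blog/ten-blue-things'
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).send('Error revalidating');
  }
}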
Server Side Rendering (in Nuxt, but I think this is fairly universal) has been explained to me as:
When using Nuxt SSR, you request a URL, and get sent back a
pre-rendered HTML page. After loading, the browser starts the
hydration code, at which point the static page becomes a Vue SPA application.
I presume (perhaps incorrectly) that even though you are now in SPA mode, if you request a new page (e.g. you are on "blog/ten-blue-things" and follow a NuxtLink to "blog/ten-red-things") the browser sends a new request to the server for this page, effectively breaking SPA mode? Or am I wrong?
I really hope the answer is that it does not make a request. In my scenario all the required data is very likely already available, and if not, the page would make an API call to get anything missing and dynamically render the new page "blog/ten-red-things". Fetching a pre-rendered page after the first fetch is wasteful in my scenario, where everything is already on the client (to clarify further: this is a PWA, offline-first app; I'm adding SSR for SEO and for social sharing of pages).
However, I also wonder: if it indeed does not make a request, does that (generally?) hold true for crawlers as well? I'd rather they DO make a separate request for each page so that they get the prerendered, SEO-friendly version of the URL. If a crawler has JS enabled, it may execute the hydration code, and then what?
Can anybody clarify this perhaps?
Does it matter for this answer if instead of SSR we employ ISG (incremental static generation)?
It does not break SPA mode: once you've SSR'ed and hydrated, the app stays an SPA. You can tell because there is no "page reload" on navigation; you're doing a vue-router client-side navigation (as opposed to a regular <a> link navigation).
Any page will still be SSR'ed properly on a direct hit, but once hydrated you will not go back to the server for client-side navigations (as opposed to Next.js).
There are several ways to debug all of this with the devtools, or by disabling JS and inspecting the page source. But yes, crawlers will properly parse the SSR'ed content.
PS: you can also choose what kind of rendering mode you want for a given page, so if you want some pages to be SPA-only, you can. The rest can be SSR'ed (during the initial render) + SPA after hydration.
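If this is Nuxt 3, that per-route choice is exposed through routeRules; a minimal sketch (the paths are made up for illustration):

// nuxt.config.ts
import { defineNuxtConfig } from 'nuxt/config'; // usually auto-imported

export default defineNuxtConfig({
  routeRules: {
    '/blog/**': { swr: 300 },      // SSR'ed, cached and regenerated in the background
    '/account/**': { ssr: false }, // SPA-only: rendered entirely on the client
  },
});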
SSR or ISG will not make any difference regarding crawlers/SEO.
ISG works differently from SSR of course, but the rendering will be the same as far as the markup is concerned; only the "freshness" will differ.
I've updated some scripts within our TypeScript files, but users still seem to get the old versions of the bundled script. How do I know? Because I'm seeing (remote) JavaScript errors in our logs that refer to a class that 100% exists within the new latest version.
It has been more than 6 hours and I can verify that the ?v={hash} has changed.
I have the feeling the suspects are, alone or in combination, any of these:
Static content (IIS)
Scripts.Render("")
TypeScript compilation
Azure
I have the feeling it's some kind of cache, or that it has to do with static content serving. Or with Scripts.Render().
What can the reason be? It drives me crazy, because I know the site is broken for some users, but I can't get a fix out.
I've included the headers below.
This is the code for the bundle. Note that core.js is being generated by TypeScript.
// Registered from the standard BundleConfig (method signature assumed).
public static void RegisterBundles(BundleCollection bundles)
{
    var bundle = new Bundle("~/scripts/main-bundle");
    bundle.IncludeDirectory("~/scripts/fork", "*.js", searchSubdirectories: true);
    bundle.IncludeDirectory("~/scripts/app", "*.js", searchSubdirectories: true);
    bundle.Include("~/scripts/bundles/core.js"); // core.js is generated by TypeScript
    bundles.Add(bundle);
}
Update
If they are getting old HTML because that HTML is cached, and the hashbomb therefore doesn't change, shouldn't they still get a consistent pair of JS and HTML?
https://poules.com/us doesn't have a Cache-Control: max-age specified, so you probably have some users on browser-cached HTML, which would use the old scripts.
There is an s-maxage set, but that only overrides max-age for shared (public) caches; max-age isn't set and this page's caching is set to private, so I don't think it is doing anything.
Also, you can check Azure to see what is deployed if you have access to the Azure portal.
The best way I've found, for any web interface that requires forcing clients to update, is to use a manifest file and have it specify the scripts. Your build then needs to update the manifest with the correct hash; I normally just use a Grunt task for this, with placeholders within the manifest.
You then manage the manifest in code, with listeners for updateReady and completed so you know when to refresh the browser.
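A rough sketch of that listener pattern (Application Cache is deprecated, so treat this purely as an illustration of the idea; the cast is there because newer TypeScript DOM typings dropped the interface):

// Swap to the freshly downloaded assets once the cache has updated.
const appCache = (window as any).applicationCache;

appCache.addEventListener('updateready', () => {
  if (appCache.status === appCache.UPDATEREADY) {
    appCache.swapCache();      // switch to the new cached files
    window.location.reload();  // reload so the new scripts actually run
  }
});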
On another note, Application Cache is for older browsers; for new browsers you have to use a service worker to provide a facility for updating your app.
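The service-worker equivalent looks roughly like this (the '/sw.js' path and the empty handler are placeholders):

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then((registration) => {
    registration.addEventListener('updatefound', () => {
      const newWorker = registration.installing;
      newWorker?.addEventListener('statechange', () => {
        // 'installed' while a controller already exists means an update is waiting
        if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
          // notify the user that a new version is available and offer a refresh
        }
      });
    });
  });
}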
With both applied you will have a release schedule you can manage, and you can mitigate issues like the one you are seeing at the moment.
Another method, if you have an API, is to have the API serve up JavaScript based on a version number for the user's current web front end. This way you can inform users that a more recent version is available and offer a reload button.
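A sketch of the version-check side of that idea, assuming a hypothetical /api/version endpoint and a constant baked in at build time:

const BUILD_VERSION = '2018-05-25.1'; // assumed to be injected at build time

async function checkForNewVersion(): Promise<void> {
  const res = await fetch('/api/version'); // hypothetical endpoint returning { version: string }
  const { version } = await res.json();
  if (version !== BUILD_VERSION) {
    // offer a reload rather than forcing one
    if (confirm('A newer version of this page is available. Reload now?')) {
      window.location.reload();
    }
  }
}

// e.g. poll every five minutes
setInterval(checkForNewVersion, 5 * 60 * 1000);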
However, Application Cache through a manifest, or service workers, is more accessible to other teams if you have a split front-end and back-end setup.
+++++++++++++++++++++
Another reason could be that your web font is blocked by ad blockers like Ghostery and AdGuard. This in turn creates an unhandled error:
auth-dialog-window?openerOrigin=https%3a%2f%2fpoules.com&color=FF533C&pool=&openerType=iframe:82 Uncaught ReferenceError: WebFont is not defined
at auth-dialog-window?openerOrigin=https%3a%2f%2fpoules.com&color=FF533C&pool=&openerType=iframe:82
This could potentially stop other things from working and loading in the right way. You need to make sure that you are picking up and catching all errors and events for all of the users, no matter what plugins they use.
To be fair, looking at it, it could very well be the ad blockers that are your main issue here.
Anyway, hope this all helps.
Even if the hashbomb stays the same, the asset/bundle files do expire at their own pace, but only after a year; at the moment of writing I see the following dates:
Date: Fri, 25 May 2018 17:11:21 GMT
Expires: Sat, 25 May 2019 17:11:22 GMT
If you update your main-bundle at Fri 18:00:00, a browser that already has the old bundle cached might not request it again until after that Expires date.
I also notice you are mixing public and private cache locations; by caching the bundles publicly, any proxy server (if one is involved) maintains its own cache, from which the browser might get served, which could also cause a delay.
Is it an option not to cache your web pages, so that the hashbomb is always up to date?
I also usually serve my assets without caching but with the Last-Modified header.
Doing so ensures that the browser at least makes a request for each asset, providing the If-Modified-Since header, upon which IIS responds with the 304 status code if there's no change (so that no bytes go over the wire).
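In other words, the exchange looks roughly like this (URL and dates made up):

GET /scripts/main-bundle?v=abc123 HTTP/1.1
If-Modified-Since: Fri, 25 May 2018 17:11:21 GMT

HTTP/1.1 304 Not Modified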
I have a Rails application hosted on Heroku. Recently we switched to using a CDN (CloudFront) for our assets. Although we faced problems with CORS, we were able to get it all sorted out with the font_assets gem and proper tuning of CloudFront.
We are seeing a few select users (two reported so far, each on latest Chrome 40+) not downloading the assets and therefore getting an unstyled page.
Why would this happen to select users? Some things I've noticed that might help:
There are a lot of 3xx responses recorded in CloudFront (more than 2xx)
Around the time that the user loaded an unstyled page, a 304 response was returned for the css file.
After deploying a new version of a website, the browser keeps loading everything from its cache of the old page until a force refresh is done. Images are old, cookies are old, and some AJAX parts are not working.
How should I proceed so that users are served the latest version of the page after a deploy?
The webpage is an ASP.NET page running on IIS 7+.
You can append a variable to the end of each of your resources that changes with each deploy. For example you can name your stylesheets:
styles.css?id=1
with the id changing each time.
This will force the browser to download the new version as it cannot find it in its cache.
For ASP.NET you can use the Cache-Control and Expires headers. You can also set up similar headers in IIS 7 for your images. If you have any other cookies, you can expire them manually.
I have not tried it, but it looks like you can do an even better job of bulk-setting cache control in IIS 7. See this thread and this link. At that point you are only left with unsetting any custom cookies you have (which you won't be able to control with HTTP cache-control settings).
I don't know of any method to "unset everything all at once" easily.
You could use HTTP headers to control caching on your clients.
I'll just leave this here for you. http://support.microsoft.com/kb/234067
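For example, a typical "don't cache the HTML itself" set of response headers looks something like this (with the HTML uncached, the versioned resource URLs from the earlier answer take effect right after a deploy):

Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0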