How can I load an i18n json from a CMS with i18next? - next.js

I am using https://github.com/i18next/next-i18next and rather than hardcoding the files, I want to use HygraphCMS (formerly GraphCMS) to provide those strings. What's the best way to approach this?
If I wait for an async request, it'll slow things down. Thanks in advance!

I bumped into a similar issue before. What I did was create a script that runs before the dev and build commands, something like:
// ...
"scripts": {
// ....
"trans": "node ./scripts/get-i18n.js",
"dev":"npm run trans && next dev",
"build":"npm run trans && next build"
}
// ...
And you write a script get-i18n.js to fetch your translations from the CMS and save them in the directory you chose in the i18n setup (a sketch of such a script is shown below).
The downside of this approach is that if the translations change in the CMS, you need to restart the server every time, or run the script manually in another shell to fetch and update the strings.
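For reference, here is a minimal sketch of what such a script could look like. It assumes Node 18+ (for the global fetch), a Hygraph GraphQL endpoint in an environment variable, and a hypothetical translations model with locale, key and value fields; adapt the query and output paths to your own schema and next-i18next config.
// scripts/get-i18n.js (sketch; endpoint, query and field names are assumptions)
const fs = require('fs');
const path = require('path');

const ENDPOINT = process.env.HYGRAPH_ENDPOINT;
const QUERY = '{ translations { locale key value } }';

async function main() {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: QUERY }),
  });
  const { data } = await res.json();

  // Group the flat list into one { key: value } object per locale
  const byLocale = {};
  for (const { locale, key, value } of data.translations) {
    (byLocale[locale] = byLocale[locale] || {})[key] = value;
  }

  // Write each locale to the folder next-i18next is configured to read from
  for (const [locale, strings] of Object.entries(byLocale)) {
    const dir = path.join('public', 'locales', locale);
    fs.mkdirSync(dir, { recursive: true });
    fs.writeFileSync(path.join(dir, 'common.json'), JSON.stringify(strings, null, 2));
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});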

The best solution here depends on how the rest of your Next.js project is set up and how much you want to avoid an async request. I've described two approaches I would consider below. Both examples assume you're using Next.js and hosting on Vercel, but something similar should be possible on other platforms.
1. Build script to store locally (without async request)
Start by writing a script to fetch all the translations and store them locally in the project (just as Aldabil21 said).
After that, you can create a deploy webhook and call it from your CMS whenever a change is made; this will ensure that the translations are always kept up-to-date. An issue with this could be that the build runs too often, so you may want to add some conditions here to prevent that, such as only calling the webhook from the CMS when the translations content changes.
2. Using incremental static regeneration (with async request)
Of course, if you're using incremental static regeneration, you might consider fetching the translations in getStaticProps, since the request is not made for each visitor.
The result of the request to the CMS's translation collection would be cached on Vercel's Edge network and then shared amongst each visitor, so only the first request after the cache has expired will trigger the full request. The maximum age for the caching of static files is 31 days, so this delay when requesting fresh data could be infrequent enough to be acceptable. Note that you have to enable this manually by setting the revalidate prop in the return object of getStaticProps.
You could even mitigate this request further (depending on your project setup) by querying only for the language that is currently used, and querying a new language client-side only when it is requested by the visitor (or possibly once the page is idle or the language switcher is opened). If you have many languages, this will reduce the download size substantially.
If you do go the getStaticProps route and are using Next.js 12.2.0 or later, you could also create a CMS webhook that calls the on-demand revalidation endpoint whenever a page is updated, which will cause the fresh translations to be stored in the cache before a user gets a chance to request them, removing the delay for all users. Or you could use the webhook in the same way as mentioned in 1 and trigger a new build (with the new translations) every time the translation collection is updated.
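To illustrate approach 2, here is a rough sketch of a page using getStaticProps with revalidate; fetchTranslations(locale) stands in for whatever query you write against your Hygraph translation collection and is not a real next-i18next API.
// pages/index.js (sketch; fetchTranslations is a hypothetical helper)
export async function getStaticProps({ locale }) {
  // Query the CMS for just the strings of the current locale
  const translations = await fetchTranslations(locale);

  return {
    props: { translations },
    // Regenerate the page in the background at most once per hour;
    // combine with an on-demand revalidation webhook if you need fresher data
    revalidate: 3600,
  };
}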

What is the difference between getStaticProps() and getServerSideProps() in Next.js?
GetStaticProps
Use inside a page to fetch data at build time.
This data will be part of your build. If data changed since the build, you wouldn't see it until you build again.
Good if you only need to update that data once in a while, manually on each deployment.
When you use getStaticProps you get the fastest performance
Can potentially deliver stale data.
Data is rendered before it gets to the client, server-side.
GetServerSideProps
Use it to fetch data every time a user requests the page.
Fetches on every client request, before sending the page to the client.
Data is refreshed every time the user loads the page
Use cases:
For example, if you are fetching all the countries available in the world, it makes sense to use getStaticProps. But if you need to retrieve user data, you should use getServerSideProps.
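As a rough illustration of that split (the country API URL and the getUserFromCookie helper are just placeholders, not part of any library):
// pages/countries.js — data that rarely changes, fetched once at build time
export async function getStaticProps() {
  const res = await fetch('https://restcountries.com/v3.1/all'); // placeholder API
  const countries = await res.json();
  return { props: { countries } };
}

// pages/profile.js — per-user data, fetched on every request
export async function getServerSideProps({ req }) {
  const user = await getUserFromCookie(req); // hypothetical auth helper
  return { props: { user } };
}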
GetStaticProps
It is a special function that tells Next JS to populate props and render the page that it is exported from into a static HTML page at build time.
Fetch request is made only at build time.
Data freshness is low; the data can become stale.
SEO friendly
Instant performance
Slow build time
GetServerSideProps
A special Next.js function that tells the Next component to populate the props and render the page into HTML at run time, on each request.
Fetch request is made at every page request.
Data freshness is high; the data is always up to date.
SEO friendly
Data is fetched on the server before the page is rendered.
Fast build time
The biggest difference is this:
With getStaticProps and getStaticPaths, all the pages of your site must be created at build time, and only then can you deploy the site to the server. That means that after you deploy, you can't add or edit a page of your site; to add or update anything, you have to edit the project and deploy it again.
But with getServerSideProps, you can edit, add, or change anything on your site at any time.
For example, you can't design a blog site with getStaticProps alone, because you want to add a post every week. But with getServerSideProps you can do it.
Beyond this difference, one important point: getStaticProps is faster than getServerSideProps.

When to use getStaticProps and getServerSide props in real world scenario

Hello, I am new to Next.js. I know that with getStaticProps Next.js will pre-render the page at build time, and with getServerSideProps Next.js will pre-render the page on each request using the data returned by getServerSideProps.
But I want an example of when to use getStaticProps and when to use getServerSideProps for a website.
With getServerSideProps (SSR), data is fetched at request time, so your page will have a higher Time To First Byte (TTFB), but it will always pre-render pages with fresh data. (This can be used for dynamic content, and it allows you to improve your SEO because the data is rendered before it reaches the client.)
With Static Generation (SSG), the HTML is generated at build time and reused on each request. TTFB is lower and the page is usually faster, but you need to rebuild your app every time the data is updated (which can be acceptable for a blog, but not for an e-commerce site).
With Incremental Static Regeneration (ISR), static content can also be dynamic: the page is rebuilt in the background on an interval. You specify how often pages are updated with a revalidate key inside getStaticProps; this works great with fallback: true and allows you to have (almost) always up-to-date content.
When to use:
getStaticProps: Any data that changes infrequently, particularly from a CMS. (Must be used with getStaticPaths if there's a dynamic route).
revalidate: An easy add-on to getStaticProps if the data might change, and we're OK serving a cached version.
getServerSideProps: Primarily useful for data that must be fetched on the server, that changes frequently, or that depends on user authentication. Use it when we want to fetch data that relates to the user's cookies/activity and is consequently not possible to cache.
SSR doesn't cache any data. It fetches new data on every request, which often results in slower performance.
SSR should be used when we don't know what the user wants; otherwise we use SSG or ISR for dynamic content.
Here are some examples of what to use in each case:
getServerSideProps (SSR):
a JWT after a successful login
GeoLocation of the user (the content on the page may depend on the geo location of the client, so it's very useful to use SSR in this case)
Static Generation (SSG):
wiki page
Privacy-policy page
a blog (if its data isn't changed very often)
Website settings (Colors, themes, ...)
Incremental Static Regeneration (ISR):
e-commerce store
news website
The data revalidation will happen on the server and will benefit all visitors.
Client-side rendering (CSR):
Content that is only accessible for authenticated users (dashboards)
The data revalidation will happen on the client and will only benefit that single user.
SWR/ReactQuery + Incremental Static Regeneration (SWR + ISR):
This approach is also a very good one if you want instantly updated data for the current user, while the page is statically regenerated for the next visitors.
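A minimal sketch of that combination, assuming a hypothetical /api/articles endpoint (the exact API and data shape are up to your project):
// pages/news.js (sketch)
import useSWR from 'swr';

const fetcher = (url) => fetch(url).then((r) => r.json());

export async function getStaticProps() {
  const articles = await fetcher('https://example.com/api/articles'); // hypothetical API
  return { props: { fallbackData: articles }, revalidate: 60 }; // ISR for the next visitors
}

export default function News({ fallbackData }) {
  // The visitor sees the statically generated data instantly,
  // while SWR revalidates it client-side for the current user
  const { data } = useSWR('https://example.com/api/articles', fetcher, { fallbackData });
  return <ul>{data.map((a) => <li key={a.id}>{a.title}</li>)}</ul>;
}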
Next.js v12.2.0 introduced "On-Demand Revalidation", which is very powerful and useful. Let's say you have a news website: the old interval-based Incremental Static Regeneration is not the best solution there. Imagine we set the revalidation to 1 hour; the urgent news we just published would not appear on the website for up to an hour. Too bad :( This is where On-Demand Revalidation comes into play. When you publish a new article, you call res.revalidate() from an API route, and the articles page is regenerated without waiting for the revalidation interval.
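A minimal sketch of such an on-demand revalidation endpoint (Next.js 12.2.0+); the secret name and the /news path are assumptions:
// pages/api/revalidate.js (sketch)
export default async function handler(req, res) {
  // Protect the endpoint with a shared secret known to the CMS webhook
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    await res.revalidate('/news'); // regenerate the articles page immediately
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).send('Error revalidating');
  }
}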

What is the difference between fallback false vs true vs blocking of getStaticPaths with and without revalidate in Next.js SSR/ISR?

As of Next.js 10, the getStaticPaths function returns an object that must contain the very important fallback key as documented at: https://nextjs.org/docs/basic-features/data-fetching#the-fallback-key-required
While the documentation is precise, it is quite hard to digest for someone that is just beginning with Next.js, could someone try to provide a simpler or more concrete overview of those options?
How to test
First of all, when testing things out to make sure I had understood them, I was getting really confused because when you run in development mode (next dev) the behavior is quite different from running in production mode (next build && next start), as it is much more forgiving to help you develop quickly. Notably, in development, getStaticPaths gets called on every render, so everything always gets rendered to its latest version, unlike production where more caching might be enabled.
The docs describe the production behavior, so to test things out, you really need to use production mode.
The next issue is that I couldn't easily find an example where you can create and update pages from inside the example itself to easily view their behavior. I finally ended up doing that at: https://github.com/cirosantilli/node-express-sequelize-nextjs-realworld-example-app while porting the awesome Realworld example project, which produces a simple multiuser blog website (mini Medium clone).
With those tools in hand, I was able to confirm what the docs say. This answer was tested at this commit which has Next.js 10.2.2.
fallback: false
This one is simple: only pages that are generated during next build (i.e. returned from the paths property of getStaticPaths) will be visible.
E.g., if a user creates a new blog page at /post/[post-id], it will not be immediately visible afterwards, and visiting that URL will lead to a 404.
That new post will only become visible if you re-run next build, and getStaticPaths returns that page under paths, which is the case for the typical use case where getStaticPaths returns all the possible [post-id].
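As a small sketch of what that typical use case looks like (findAllPosts() is a hypothetical database helper):
// pages/post/[post-id].js (sketch)
export async function getStaticPaths() {
  const posts = await findAllPosts(); // hypothetical helper
  return {
    // Every existing post gets pre-rendered at build time...
    paths: posts.map((post) => ({ params: { 'post-id': String(post.id) } })),
    // ...and anything else is a 404 until the next build
    fallback: false,
  };
}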
fallback: true
With this option, Next checks if the page has been pre-rendered to HTML under .next/server/pages.
If it has not:
Next first quickly returns a dummy pre-render with empty data that had been created at build time.
At this point, you are expected to tell the user that the page is loading.
You must handle that case, or else it could lead to exceptions being thrown due to missing properties.
The way to handle this is described in the docs by checking router.isFallback:
import { useRouter } from 'next/router'

function Post({ post }) {
  const router = useRouter()

  // If the page is not yet generated, this will be displayed
  // initially until getStaticProps() finishes running
  if (router.isFallback) {
    return <div>Loading...</div>
  }

  // Render post...
  return <div>{post.body}</div>
}
So in this example, if we hadn't done the router.isFallback check, post would be undefined, and doing post.body would throw an exception.
After the actual page finishes rendering for the first time with data (the data is fetched with getStaticProps at runtime), the user's browser gets automatically updated to see it, and Next.js stores the resulting HTML under .next/server/pages.
If the page is present under .next/server/pages however, either because:
it was rendered by next build
it was rendered for the first time at runtime
Next.js just returns it, without rendering again.
Therefore, if you edit the post, the cached page will not be re-rendered. The outdated page will be returned at all times, because it is already present under .next/server/pages, so Next.js does not re-render it.
You will have to re-run next build to see updated versions of the pages.
Therefore, this is not what you generally want for the multi-user blog described above. This approach is generally only suitable for websites that don't have user-generated content, e.g. an e-commerce website where you control all the content.
fallback: true: what about pages that don't exist?
If the user accesses a page that does not exist, like /post/i-dont-exist, Next.js will try to render it just like any other page, because it checks that it is not in .next/server/pages and assumes that it just hasn't been rendered before.
This is unlike fallback: false, where Next.js never generates new pages at runtime and just returns a 404 directly.
In this case, your code will notice that the page does not exist when getStaticProps queries the database, and then you tell Next.js that this is a 404 with notFound: true, as mentioned at: How to return a 404 Not Found page and HTTP status when an invalid parameter of a dynamic route is passed in Next.js?, so Next.js renders a 404 page and caches nothing.
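In code, that looks roughly like this (findPost() is a hypothetical database helper):
// getStaticProps for /post/[post-id] with fallback: true (sketch)
export async function getStaticProps({ params }) {
  const post = await findPost(params['post-id']); // hypothetical helper
  if (!post) {
    return { notFound: true }; // Next.js renders the 404 page and caches nothing
  }
  return { props: { post } };
}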
fallback: 'blocking'
This is quite similar to fallback: true, except that it does not return the dummy loading page when a page that hasn't been cached is hit for the first time.
Instead, it just makes the browser hang, until the page is rendered for the first time.
Future requests to that page are quickly served from the cache however, just like fallback: true.
https://dev.to/tomdohnal/blocking-fallback-for-getstaticpaths-new-next-js-10-feature-1727 mentions the rationale for this; it appears to break certain rather specific features, and it is generally not what you want unless you need one of those specific features.
Note that Next.js documentation explicitly states that in fallback: true, it detects crawlers (TODO how exactly? User agent or something else? Which user agents), and does not return the loading page to crawlers, which would defeat the purpose of SSR. https://nextjs.org/docs/basic-features/data-fetching#the-fallback-key-required mentions:
Note: this "fallback" version will not be served for crawlers like Google and instead will render the path in blocking mode.
so there doesn't seem to be a huge advantage for SEO purposes in using 'blocking' over true.
However, if your user is a security freak and disables JavaScript, they will only see the loading page. And are you sure the Wayback machine won't show the loading page? What about wget? Since I like such use cases, I'm tempted to just use fallback: 'blocking' everywhere.
revalidate: Incremental Static Regeneration (ISR)
When revalidate is given, new requests to a page that is already in the .next/server/pages cache also cause the cached page to be regenerated. This is called "Incremental Static Regeneration".
revalidate: n means that our server will do at most 1 re-render every n seconds. If a second request comes in before the n seconds, the previously rendered page is returned and a new re-render is not triggered. So large n means users see more outdated pages, but less server workload.
A large revalidate value could therefore help the server handle large traffic peaks by caching the reply.
This is what we have to use if we want website users to both publish and update their own posts:
either fallback: true or fallback: 'blocking'
together with revalidate: <integer>
revalidate does not make much sense with fallback: false.
When revalidate: <number> is given, behavior is as follows:
if the page is present under .next/server/pages, return the pre-rendered page immediately, possibly rendered with outdated data.
At the same time, also kick off a page rebuild with the newest data.
When the rebuild is finished, the page won't be automatically updated to the latest version; the user would have to refresh the page to see the updated version.
otherwise, if the page is not cached, do the same as true or 'blocking' would do, by either returning a dummy wait page or blocking until it gets done, and create the cached page.
After a page is built by either of the above cases (first time or not), if it gets accessed again in the next <number> seconds, do not trigger rebuilds. This way, if a very large number of users are visiting the website, most of the requests won't require expensive server-side render work: we will do at most one re-render every <number> seconds.
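Putting this together, an ISR setup for the multi-user blog described above could look roughly like this (findPopularPosts() and findPost() are hypothetical database helpers):
// pages/post/[post-id].js (sketch)
export async function getStaticPaths() {
  const posts = await findPopularPosts(); // pre-render only the most visited posts at build time
  return {
    paths: posts.map((post) => ({ params: { 'post-id': String(post.id) } })),
    fallback: 'blocking', // unknown posts are rendered on first request
  };
}

export async function getStaticProps({ params }) {
  const post = await findPost(params['post-id']);
  if (!post) return { notFound: true };
  return {
    props: { post },
    revalidate: 10, // at most one background re-render every 10 seconds
  };
}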
Explicit invalidation with: res.unstable_revalidate
This is currently beta, but it seems that at last they are introducing a way to explicitly invalidate pages as currently mentioned at: https://vercel.com/docs/concepts/next.js/incremental-static-regeneration
This way, instead of ugly revalidate timeouts, we will be able to rebuild pages only when needed, if we are able to detect the page becoming outdated on the server. Then we can just serve the cached page directly every time.
It will presumably be renamed to res.revalidate once it becomes stable.
This awesome development was brought to my attention by Sebastian in the comments.
SSR for a single request (i.e. ignore revalidate) so that users can see the results of their blog page edits
Edit: this use case might be best resolved with the upcoming res.unstable_revalidate/res.revalidate.
If for example:
the blog author clicks submit after updating an existing post
and they got redirected to the post view page as is usual behavior, to see if everything looks OK
they first see the outdated version of the post. This visit after the redirect triggers a rebuild with the new data they provided on the edit page, and only after that finishes, and the user refreshes, would they see the updated page.
So this behavior is also not ideal UI behavior for the editor, as the user would be left thinking:
What just happened, was my edit not registered?
for a few seconds.
This can be solved with "preview mode", which is documented at: https://nextjs.org/docs/advanced-features/preview-mode (available since Next.js 9.3). Preview mode checks if some cookies are set, and if so makes getStaticProps rerun regardless of revalidate, just like getServerSideProps.
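A minimal sketch of such a preview endpoint; the secret name and redirect handling are assumptions:
// pages/api/preview.js (sketch)
export default function handler(req, res) {
  // Only editors who know the shared secret may enable preview mode
  if (req.query.secret !== process.env.PREVIEW_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  res.setPreviewData({}); // sets the preview cookies
  res.redirect(req.query.slug || '/'); // send the editor back to the page they were editing
}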
However, even preview mode does not solve this use case super nicely, because it does not invalidate/update the cache, which is a widely requested thing, related:
Next.js ISR ( Incremental Static Regeneration ), how to rebuild or update a specific page manually or dynamically before the interval/ISR time start?
How to clear/delete cache in NextJs?
so it could still happen that a user visits the page outside of preview mode and sees the outdated page. I could work around this by removing the cookies and making an extra GET request, but this produces a useless GET request and adds more complexity.
I learned about this after opening an issue about it at: https://github.com/vercel/next.js/discussions/25677 thanks to #sergioengineer for pointing it out.
Related threads:
https://github.com/vercel/next.js/discussions/11698#discussioncomment-351289
https://github.com/vercel/next.js/discussions/11552
SSR vs ISR: per user-login-based information
ISR is an optimization over SSR. However, like every optimization, it can increase the complexity of the system.
For example, suppose that users can "favorite" blog posts.
If we use ISR, it only makes sense to pre-render the logged-off version of a page, because it only makes sense to pre-render the parts that are common to multiple users.
Therefore, if we want to show to the user the information:
Have I starred this page yet or not?
then we have to do a second API request and then update the page state with it.
While it may sound simple, this adds considerable extra complexity to the code in my experience.
With SSR however, we could simply check the login cookies sent by the user as usual, and fully render the page perfectly customized to the current user on the server, so that no further API requests will be needed. Much simpler.
So you should really only use ISR here if you benchmark it and it is worth it.
Here's an example of checking login cookies: https://github.com/cirosantilli/node-express-sequelize-nextjs-realworld-example-app/blob/8dff36e4bcf659fd048e13f246d50c776fff0028/back/IndexPage.ts#L23 That sample setup uses the exact same SWR tokens that are being used to make JavaScript API requests but also via cookies. We don't have to worry about XSS in that demo because we only use login on GET requests. All modifying requests like POST are done from JavaScript exclusively, and don't authenticate from cookies.
The ISR dream: infinite revalidate + explicit invalidation + CDN hooks
As of Next.js 12, ISR is wonky for such a CRUD website, what I would really want is for things to work as follows:
when the user creates a blog post, we use a post creation hook to upload the result to a CDN of choice
when a user views a blog post, it goes to the CDN directly and does not touch the server. Only if the user wants to fetch user-specific data such as "have I starred this page" does it make a small API request to the server
when a user updates a blog post, it just updates the result on the CDN of choice
This approach would really lead to ultra-fast page loads and minimal server workload Nirvana.
I think Vercel, the company behind Next.js, might have such a CDN system running on their product, but I don't see how to nicely use an arbitrary CDN of choice, because I don't see such hooks. I hope I'm wrong :-)
But just the explicit invalidation + infinite revalidate would already be a great thing to have even without the CDN hook system. Edit: this might be coming with res.unstable_revalidate, see section above.
Whenever we want to implement ISR or SSG techniques in dynamic routes, we are supposed to pass the paths that we want to be statically generated at build time to the getStaticPaths function. Although, in some situations we might have new paths that are not returned by getStaticPaths, and we have to handle those paths with the fallback property that is also returned from getStaticPaths (see the Next.js official docs).
fallback property can accept 3 values:
false :
new paths will result in a 404 page
true :
the new path will be statically generated (getStaticProps is called)
a loading state is shown while generating the page (via router.isFallback and showing the fallback page)
the page is rendered with the required props after generating
the new path will be cached in the CDN (later requests will get the cached page)
crawler bots may index the fallback page (not good for SEO)
"blocking" :
the new path waits for the HTML to be generated (via SSR); there is no loading state (no fallback page)
the new path will be cached in the CDN (later requests will get the cached page)
NOTE: since Next.js 12, fallback: true in the ISR technique won't show the fallback page to crawler bots (read more in the docs).
When creating dynamic pages in our app (for example a video app), we need to configure how next.js will fallback during the request.
If we know that our pages and our app's data responses are quick, we can use fallback: 'blocking'. We do not need to show a loading state because Next.js will wait for the page to be fully pre-generated on the server before serving it.
With fallback: false, if a new page is not found, a 404 page is displayed. false is used if you want to generate ALL of your dynamic paths at build time. In this case, in getStaticPaths you need to fetch all the items you have in your database. Since pre-built pages are served from a CDN, the look-up time is pretty quick; you are not actually fetching data, so you do not need a "loading" state. You are just checking whether the given URL path has a pre-generated page or not. If in the future you need to add more paths, you need to rebuild your app.
In your video app you might have too many videos, so you only pre-build the most popular video pages. If a user visits a video whose page was not pre-generated, you have to do data fetching, so you need a "loading" state. In that case you set fallback: true. Since data fetching takes time, if you do not show a different component while loading, you might get an error like "Cannot read property 'title' of undefined", since at that moment the title of the video is not defined yet.
import { useRouter } from 'next/router'

function Video({ videoId }) {
  const router = useRouter()

  // If the page is still being generated, show a loading state
  if (router.isFallback) {
    return <div>Loading...</div>
  }

  // then return the main component
}
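For completeness, the matching getStaticPaths could look something like this (getPopularVideoIds() and getVideo() are hypothetical data helpers):
// pages/videos/[id].js (sketch)
export async function getStaticPaths() {
  const ids = await getPopularVideoIds(); // only the most popular videos are pre-built
  return {
    paths: ids.map((id) => ({ params: { id: String(id) } })),
    fallback: true, // other videos are generated on first request, showing the loading state above
  };
}

export async function getStaticProps({ params }) {
  const video = await getVideo(params.id);
  return { props: { video } };
}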

API caching with Symfony2

How can I cache my API responses built with Symfony?
I started to dig into FosCacheBundle and the SymfonyHttpCache, but I'm not sure about my usecase.
You can only access the API with a token in the header, and every user gets the same data in their response for the same URL called (and with the same GET parameters).
I would like to have a cache entry for each of my URL (including get parameters)
and also, is it possible to reorder my GET parameters before the request is processed by my cache system (so that the system doesn't create multiple cache entries for "URL?foo=bar&foz=baz" and "URL?foz=baz&foo=bar", which return the same data)?
Well there are multiple ways.
But the simplest is this:
If the biggest problem is database access, then just caching the compiled result in Memcached or similar will go a long way. Plus, this way you stick to your already working authentication.
In your current controller action, after authentication and before payload creation, check if there's an entry in Memcached. If not, build the payload, save it into Memcached, then return it. When the next request comes along there will be no DB access, as the response will be returned from Memcached. Just don't forget to refresh the cache however often you need.
Note:
"Early optimization is the root of all evil" and "Servers are cheaper than programmer hours" are to things to keep in mind. Don't complicate your life with really advanced caching methods if you don't need to.

Synchronizing local cache with external application

I have two separate web applications:
The "admin" application where data is created and updated
The "public" application where data is displayed.
The information displayed on the "public" changes infrequently, so I want to cache it.
What I'm looking for is the "simplest possible thing" to update the cache on the public site when a change is made in the admin site.
To throw in some complexity, the application is running on Windows Azure. This rules out file and sql cache dependencies (at least the built in ones).
I am running both applications on a single web role instance.
I've considered using Memcached for this purpose, but since I'm not really after a distributed cache, and the performance is not as good as using an in-memory cache (System.Runtime.Caching), I want to try and avoid this.
I've also considered using NServiceBus (or the Azure equivalent) but again, this seems overkill just to send a notification to clear the cache.
What I'm thinking (maybe a little hacky, but simple):
Have a controller action on the public site that clears the in-memory cache. I'm not bothered about clearing specific cached items; the data doesn't change enough for me to worry about that. When the "admin" application makes a change, we make an HttpWebRequest to the clear-cache action on the public site.
Since the database is the only shared resource between the two applications, just adding a table with the datetime of the last update. The public site will make a query on every request and compare the database last update datetime to one that we will hold in memory. If it doesn't match then we clear the cache.
Any other recommendations or problems with the above options? The key thing here is simple and high performance.
1., where you have a controller action to clear the cache, won't work if you have more than one instance; otherwise, if you know you have one and only one instance, it should work just fine.
2., where you have a table that stores the last update time, would work fine for multiple instances but incurs the cost of a SQL database query per request -- and for a heavily loaded site this can be an issue.
Probably fastest and simplest is to use option 2 but store the last update time in table storage rather than a SQL database. Reads to table storage are very fast -- under the covers it's a simple HTTP GET.
Having a public controller that you can call to tell the site to clear its cache will work as long as you only have one instance of the main site. As soon as you add a second instance, as calls go through the load balancer, your one call will only go to one instance.
If you're not concerned about how soon the update makes it from the admin site to the main site, the best performing and easiest (but not the cheapest) solution is to use the Azure AppFabric Cache and configure it to use a local (in-memory) cache with a short-ish timeout (say 10 minutes).
The first time your client tries to access an item this would be what happens
Look for the item in local cache
It's not there, so look for the item in the distributed cache
It's not there either so load the item from persistent storage
Add the item to the cache with a long-ish time to live (48 hours is the default I think)
Return the item
Steps 1 and 2 are taken care of for you by the library, the other bits you need to write. Any subsequent calls in the next X minutes will return the item from the in memory cache. After X minutes it falls out of the local cache. The next call loads it from the distributed cache back into the local cache and you can carry on.
All your admin app needs to do is update the database and then remove the item from the distributed cache. The next time the item falls out of the local cache on the client, it will simply reload the data from the database.
If you like this idea but don't want the expense of using the caching service, you could do something very similar with your database idea. Keep the cached data in a static variable and just check for updates every x minutes rather than with every request.
In the end I used Azure Blobs as cache dependencies. I created a file change monitor to poll for changes to the files (full details at http://ben.onfabrik.com/posts/monitoring-files-in-azure-blob-storage).
When a change is made in the admin application I update the blob. When the file change monitor detects the change we clear the local cache.
