How to keep a Puppeteer instance in a Next.js server

I'm using Puppeteer in a Next.js server module, and only one instance of the Puppeteer Browser is needed (it also carries login information and some state from a few Puppeteer pages). I keep this instance on global.myBrowser (I don't need a database, so I can't store it there); if I don't keep it on global.myBrowser, Next.js loads this module more than once and I can't keep the state.
It's all good until I modify this module: hot reload doesn't work, because global.myBrowser still points to the old module and the new module won't work.
My questions are:
Is there a better way to keep this Puppeteer instance?
If I use a global to track the instance, how can I detect a hot reload? I tried module.hot.accept but get "accept is undefined".
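For context, the global.myBrowser setup described above usually looks something like the following minimal sketch (the accessor name and launch options are placeholders, and it does not by itself solve detecting a hot reload); the browser survives reloads precisely because it lives on the global object rather than in module state:
// lib/browser.ts - hypothetical sketch: cache a single Puppeteer browser on globalThis
import puppeteer, { Browser } from "puppeteer";

// Hot reload re-evaluates modules but keeps globalThis, so the cached browser survives.
const globalRef = globalThis as unknown as { myBrowser?: Browser };

export async function getBrowser(): Promise<Browser> {
  if (!globalRef.myBrowser || !globalRef.myBrowser.isConnected()) {
    // Launch once and cache; later calls (including from re-loaded modules) reuse the same instance.
    globalRef.myBrowser = await puppeteer.launch({ headless: true });
  }
  return globalRef.myBrowser;
}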

Related

How can I load an i18n json from a CMS with i18next?

I am using https://github.com/i18next/next-i18next and rather than hardcoding the files, I want to use HygraphCMS (formerly GraphCMS) to provide those strings. What's the best way to approach this?
If I wait for an async request, it'll slow things down. Thanks in advance!
I bumped into a similar issue before; what I did was create a script that runs before the dev and build commands, something like:
// ...
"scripts": {
  // ...
  "trans": "node ./scripts/get-i18n.js",
  "dev": "npm run trans && next dev",
  "build": "npm run trans && next build"
}
// ...
Then you write the get-i18n.js script to fetch your translations from the CMS and save them in the directory you chose in the i18n setup.
The downside of this approach is that if the translations change in the CMS, you need to restart the server every time, or run the script manually in another shell to fetch and update the strings.
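For reference, a rough sketch of what such a get-i18n.js script could look like (the endpoint, locales, and namespace below are placeholders; a real Hygraph setup would query its GraphQL content API instead):
// scripts/get-i18n.js - hypothetical sketch: fetch translations from the CMS and write next-i18next JSON files
const fs = require("node:fs");
const path = require("node:path");

const CMS_ENDPOINT = process.env.CMS_TRANSLATIONS_URL; // placeholder: an endpoint returning { "key": "value" } per locale
const LOCALES = ["en", "de"]; // placeholder: the locales configured for next-i18next

async function main() {
  for (const locale of LOCALES) {
    // Node 18+ has a global fetch; on older versions pull in node-fetch or similar.
    const res = await fetch(`${CMS_ENDPOINT}?locale=${locale}`);
    const messages = await res.json();

    // Write to the localePath used by next-i18next (public/locales by default).
    const dir = path.join("public", "locales", locale);
    fs.mkdirSync(dir, { recursive: true });
    fs.writeFileSync(path.join(dir, "common.json"), JSON.stringify(messages, null, 2));
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});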
The best solution here would depend on how the rest of your Next.js project is set up and how much you want to avoid an async request. I've described two approaches I would consider below. Both examples assume you're using Next.js and hosting on Vercel, but it should be possible, and similar, on other platforms.
1. Build script to store locally (without async request)
Start by writing a script to fetch all the translations and store them locally in the project (just as Aldabil21 said).
After that, you can create a deploy webhook and call it from your CMS whenever a change is made; this will ensure that the translations are always kept up-to-date. An issue with this could be that the build runs too often, so you may want to add some conditions here to prevent that, such as only calling the webhook from the CMS when the translations content changes.
2. Using incremental static regeneration (with async request)
Of course, if you're using incremental static regeneration, you might consider fetching the translations using getStaticProps, as the request is not made for each visitor.
The result of the request to the CMS's translation collection would be cached on Vercel's Edge Network and shared among visitors, so only the first request after the cache has expired triggers a full request. The maximum age for the caching of static files is 31 days, so this delay when requesting fresh data could be infrequent enough to be acceptable. Note that you have to enable this manually by setting the revalidate prop in the return object of getStaticProps.
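As a minimal sketch of what that could look like with next-i18next and its default public/locales setup (the namespace and the 60-second interval are placeholder choices):
// pages/index.js - hypothetical sketch: translations delivered through getStaticProps with ISR
import { serverSideTranslations } from "next-i18next/serverSideTranslations";

export async function getStaticProps({ locale }) {
  return {
    props: {
      // Loads the translation JSON for the current locale (written at build time or fetched here from the CMS).
      ...(await serverSideTranslations(locale, ["common"])),
    },
    // Opt in to incremental static regeneration: re-generate (and re-read translations) at most once per minute.
    revalidate: 60,
  };
}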
You could mitigate this request even further (depending on your project setup) by querying only the language that is currently in use, and fetching another language client-side only when the visitor requests it (or perhaps once the page is idle or the language switcher is opened). If you have many languages, this will reduce the download size substantially.
If you do go the getStaticProps route and are using Next.js 12.2.0 or later, you could also create a CMS webhook that calls the on-demand revalidation endpoint whenever a page is updated, causing the fresh translations to be stored in the cache before a user gets a chance to request them, which removes the delay for all users. Or you could use the webhook in the same way as mentioned in 1 and trigger a new build (with the new translations) every time the translation collection is updated.
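And a hedged sketch of such an on-demand revalidation endpoint (the secret and the revalidated path are placeholders to adapt to your CMS webhook):
// pages/api/revalidate.js - hypothetical sketch of an on-demand revalidation endpoint (Next.js 12.2.0+)
export default async function handler(req, res) {
  // REVALIDATE_SECRET is a placeholder shared with the CMS webhook so that only it can trigger revalidation.
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: "Invalid token" });
  }
  try {
    await res.revalidate("/"); // placeholder: revalidate the page(s) whose translations changed
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).send("Error revalidating");
  }
}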

What is the difference between getStaticProps() and getServerSideProps() in Next.js

What is the difference between getStaticProps() and getServerSideProps() in Next.js?
getStaticProps
Use it inside a page to fetch data at build time.
This data becomes part of your build; if the data changes after the build, you won't see the change until you build again.
Good if you only need to update that data once in a while, manually on each deployment.
When you use getStaticProps you get the fastest performance.
Can potentially deliver stale data.
Data is rendered on the server before it gets to the client.
getServerSideProps
Use it to fetch data for every request a user makes to the page.
Fetches on every client request, before sending the page to the client.
Data is refreshed every time the user loads the page.
Use cases:
For example, if you are fetching all the countries available in the world, it makes sense to use getStaticProps. But if you need to retrieve user data, you should use getServerSideProps.
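To make the difference concrete, here is a rough sketch of both patterns (the endpoints are placeholders):
// pages/countries.js - hypothetical sketch: fetched once at build time with getStaticProps
export async function getStaticProps() {
  const res = await fetch("https://example.com/api/countries"); // placeholder endpoint
  const countries = await res.json();
  // The result is baked into the static HTML/JSON at build time; it only changes on the next build.
  return { props: { countries } };
}

// pages/profile.js - hypothetical sketch: fetched on every request with getServerSideProps
export async function getServerSideProps({ req }) {
  // Per-user data has to be fetched per request, e.g. using the visitor's session cookie.
  const res = await fetch("https://example.com/api/me", {
    headers: { cookie: req.headers.cookie || "" },
  });
  const user = await res.json();
  return { props: { user } };
}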
getStaticProps
A special function that tells Next.js to populate the props and render the page it is exported from into a static HTML page at build time.
The fetch request is made only at build time.
Data freshness is low to none (the data can go stale).
SEO friendly.
Near-instant performance.
Slow build time.
getServerSideProps
A special Next.js function that tells Next.js to populate the props and render the page into HTML at run time, on each request.
The fetch request is made on every page request.
Data freshness is high (always up to date).
SEO friendly.
Data is loaded before the page is rendered.
Fast build time.
The biggest difference is this:
With getStaticProps and getStaticPaths, you must generate all the pages of your site up front, and only then deploy it to the server. That means that after you deploy, you can't add or edit a page; to add or update anything, you have to edit the project and deploy it again.
But with getServerSideProps, you can edit, add, or change anything on your site at run time.
For example, you can't design a blog with getStaticProps alone, because you want to add a new post every week; with getServerSideProps, you can.
Beyond that difference, one more thing is important: getStaticProps is faster than getServerSideProps.
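To illustrate the point about pre-generating pages, here is a hedged sketch of getStaticPaths (the endpoint and slugs are placeholders):
// pages/posts/[slug].js - hypothetical sketch: every post page is generated at build time
export async function getStaticPaths() {
  const posts = await fetch("https://example.com/api/posts").then((r) => r.json()); // placeholder endpoint
  return {
    paths: posts.map((post) => ({ params: { slug: post.slug } })),
    fallback: false, // any slug not returned here will 404 until the site is rebuilt
  };
}

export async function getStaticProps({ params }) {
  const post = await fetch(`https://example.com/api/posts/${params.slug}`).then((r) => r.json());
  return { props: { post } };
}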

How to avoid repeated API call across an entire NextJS application?

This seems like a really simple thing to do, yet I am having trouble finding the right architecture to do this.
Here's the scenario:
We have an API route, api/templates, that should, in theory, be called on every single route/page of the app. It fetches all the different templates, and all the data in the app belongs to one of those templates. These are dynamic and can change over time, so they are not an importable JSON file.
Every page should get these assets on load, but...
once they are loaded and you start navigating through pages, the app should NOT re-fetch them on every single page navigation
We will implement a socket notification to alert an already-loaded client when templates change in the database
The problem is that, since these templates are needed on every page, SSR still needs access to them on every page, and our SEO policy requires server-side rendering so these pages reach the client fully rendered.
So, what we are looking for is:
to have a somewhat 'conditional' getServerSideProps that fetches the templates on a full reload but skips the fetch if they are already in the client's memory (see the sketch after this question)
we have looked into SWR, which, in theory, would work, but it still makes the API call as an afterthought: it helps on the client side but defeats the objective of not actually making the call, so the backend is still burdened with an unnecessary request
Honestly, this looks like a very common pattern, yet I have completely failed to find a proper solution within the Next.js app environment. Maybe it's an "anti-pattern" and we shouldn't be doing this?
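For what it's worth, one way to sketch the 'conditional' getServerSideProps idea from the list above: on client-side navigations Next.js requests the props as JSON from a /_next/data/... URL, so the function can treat those differently from full page loads. This relies on a routing internal rather than a documented contract, and the endpoint below is a placeholder:
// pages/anyPage.js - hypothetical sketch: skip the templates fetch on client-side navigations
export async function getServerSideProps({ req }) {
  // Full reloads request the page URL itself; client-side navigations request /_next/data/<buildId>/...json.
  const isClientNavigation = req.url && req.url.startsWith("/_next/data/");

  const templates = isClientNavigation
    ? null // the client already holds the templates (e.g. in a context/store), so skip the backend call
    : await fetch("https://example.com/api/templates").then((r) => r.json()); // placeholder endpoint

  return { props: { templates } };
}
On the client, the first payload would be kept in a context or store, and the null props on later navigations would simply be ignored.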

Spartacus state transfer implementation

I installed SSR in a Spartacus application with the schematics command and ran the application in SSR mode.
In Chrome dev tools, I can see that the content of my homepage was successfully generated by the SSR server.
However, I can see in the network tab that XHR requests are still sent and data is fetched with HTTP calls instead of using the data fetched by Express.
I tried to use the config from https://sap.github.io/spartacus-docs/ssr-transfer-state/#page-title in my app.module.ts providers with provideConfig (I also tried it with ConfigModule.withConfig in the app.module.ts imports), but it didn't work: the CMS and product requests were still sent from the client.
At the end of my rendered HTML I can see that state data is linked in key value format to script tag:
<script id="spartacus-state" type="application/json">...</script>
I am wondering if it is possible to use that data from the HTML template on the client side and not fetch it again with HTTP calls (i.e. implement Angular Universal's transfer state in a Spartacus app).
I would be very grateful for any advice on the correct approach (overriding adapters or resolvers, or any other solution) for implementing state transfer in a Spartacus app.
Thank you.
The configured transfer state should work OOTB for some parts of the data, but not for everything. See the source code of cms-store.module.ts, product-store.module.ts and site-context-store.module.ts.
Note that by default the CMS page data is re-loaded in the client unless you set RouteLoadStrategy to ONCE in the routing config (sketched below).
If you find that the Spartacus transfer state is not working correctly, please create a bug ticket in the Spartacus repo and reference it here.
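For reference, a minimal sketch of that routing configuration, assuming it goes into the providers of app.module.ts (or the generated Spartacus configuration module); double-check the import paths against your Spartacus version:
// app.module.ts - hypothetical sketch: reuse already-loaded CMS page data instead of re-fetching per route
import { NgModule } from "@angular/core";
import { provideConfig, RouteLoadStrategy } from "@spartacus/core";

@NgModule({
  // ...existing imports, declarations, bootstrap...
  providers: [
    provideConfig({
      routing: {
        loadStrategy: RouteLoadStrategy.ONCE, // load CMS page data once, so data from SSR/transfer state is reused
      },
    }),
  ],
})
export class AppModule {}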

Is there a way to change the MONGO_URL in code?

I'm searching for a way to change the way Meteor loads the Mongo database. Right now, I know I can set an environment variable when I launch Meteor (or export it), but I was hoping there was a way to do this in code. This way, I could dynamically connect to different instances based on conditions.
An example test case would be for the code to parse the URL 'testxx.site.com', look up a URL based on the 'testxx' subdomain, and then connect to that particular instance.
I've tried setting the process.env.MONGO_URL in the server code, but when things execute on the client, it's not picking up the new values.
Any help would be greatly appreciated.
Meteor connects to Mongo right when it starts (using this code), so any changes to process.env.MONGO_URL won't affect the database connection.
It sounds like you are trying to run one Meteor server on several domains and have it connect to several databases at the same time depending on the client's request. This might be possible with traditional server-side scripting languages, but it's not possible with Meteor because the server and database are pretty tightly tied together, and the server basically attaches to one main database when it starts up.
The *.meteor.com hosting is doing something similar to this right now, and in the future Meteor's Galaxy commercial product will allow you to do this - all by starting up separate Meteor servers per subdomain.
