is this Service Worker configuration risky? - next.js

I am working on an e-commerce site built with Next.js. I am trying to improve the page load speed; the site loads a lot of JS files due to the number of third-party vendors it has (which can't be removed). I am planning to cache some static assets with a Service Worker.
I am going to use the next-offline library, which uses workbox-webpack-plugin under the hood. This is the configuration I am planning to use:
workboxOpts: {
  swDest: '../public/service-worker.js',
  maximumFileSizeToCacheInBytes: 20000000,
  runtimeCaching: [
    {
      urlPattern: /https:\/\/fonts\.googleapis\.com\/icon[\w\-_\/\.\:\?\=\&\+]*/,
      handler: 'CacheFirst',
      options: {
        cacheName: 'google-fonts',
        expiration: {
          maxEntries: 10,
          maxAgeSeconds: 30 * 24 * 60 * 60, // 1 month
        },
      },
    },
    {
      urlPattern: /[\w\-_\/\.:]*\.(jpeg|png|jpg|ico|svg)\??.*/,
      handler: 'CacheFirst',
      options: {
        cacheName: 'cache-img',
        expiration: {
          maxEntries: 15,
          maxAgeSeconds: 2 * 24 * 60 * 60, // 2 days
        },
      },
    },
    {
      urlPattern: /^((?!monetate).)*\.(js)$/,
      handler: 'StaleWhileRevalidate',
    },
    {
      urlPattern: /\/(browse.*|catalogsearch.*)$/,
      handler: 'StaleWhileRevalidate',
    },
  ],
},
So, my questions are the following:
Do you think this configuration is risky? Would you change anything? I had several problems in the past with Service Workers caching JS files, where I had to set a version for every file to make it work. However, it now seems that Workbox handles this.
Should I set a maxAge for the StaleWhileRevalidate strategy? I want to revalidate the data every time the user reloads the page. However, here it says that the cache will be used without revalidating until the maxAge time is complete. What happens if I don't set a maxAge in the Workbox settings?
I am testing this on a Vercel deployment (in a testing environment); it seems to be working fine, and I am thinking of moving it to production.
Thanks,

You can use next-offline (I haven't used it much), or you can go ahead and use workbox-webpack-plugin directly. Its precaching takes care of which files to refetch after caching (it uses a revision and/or a hash in the URL).
Read this section : https://developers.google.com/web/tools/workbox/modules/workbox-precaching#how_workbox-precaching_works
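To illustrate what that section describes, here is a minimal sketch of the precaching side (InjectManifest-style; the manifest entries in the comment are illustrative, not the output of your actual build):

// service-worker.js (sketch)
import { precacheAndRoute } from 'workbox-precaching';

// At build time Workbox replaces self.__WB_MANIFEST with entries such as:
//   { url: '/_next/static/chunks/main-abc123.js', revision: null }  // hash already in the URL
//   { url: '/offline.html', revision: '5f8d2e' }                    // revision tracked by Workbox
// A changed file gets a new revision (or a new hashed URL), so it is refetched
// even though an older copy is already cached.
precacheAndRoute(self.__WB_MANIFEST);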

Related

Why do hydration errors happen only in production, a few hours after deploying a Quasar application?

I'm running into a weird situation here that is becoming extremely hard to debug.
I made a post at https://forum.cleavr.io/t/cloudflare-caching-in-a-quasar-app/844 thinking that the problem was about caching.
We are getting hydration errors in our web app codotto.com ONLY after a few hours (or minutes, depending on website traffic). As soon as I redeploy the app, everything works well again and there are no more hydration errors.
We suspected caching, but we have disabled caching completely in our Cloudflare dashboard, and we can verify that the cache is not being used: the CF-Cache-Status response header is set to DYNAMIC, which per Cloudflare's documentation means the cache is not being used.
Locally the app works great, and we are not able to reproduce the issue locally or in staging. It only happens in production. Here are our pm2 configurations:
production
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxx/codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 7145,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
staging
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "staging.codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxxx/staging.codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 9892,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
We are running out of ideas; this only happens in production, which makes it extremely hard to debug, and we are narrowing the problem down to the server configuration.
We understand that this question might be flagged as off-topic since it's not too specific, but what I'm looking for are ideas on what might help debug this issue. The web app is built with the Quasar framework in SSR mode, and we are using https://cleavr.io/ to deploy it.
We have tried following the guide on Quasar's website for debugging hydration errors, but it hasn't gotten us anywhere.
In case you would like to reproduce the bug, you will need to sign up for an account on codotto.com, then visit https://codotto.com so that you are redirected to the dashboard instead of the landing page.
Can anyone here help or explain why we have these hydration errors?
The problem was not related to caching or any of the other issues we thought it was related to.
In one of our boot files we had the following:
export default boot(({ router }) => {
  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    const appStore = useAppStore();
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});
In this file we didn't pass the store to useAppStore, causing cross-request state pollution and, consequently, the hydration errors. The fix was to pass the store to useAppStore:
export default boot(({ router, store }) => {
  const appStore = useAppStore(store);
  ...
})
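For context, here is the corrected boot file with the fix applied to the guard logic from the question (a sketch; the import paths are assumptions, everything else comes from the snippets above):

// boot/auth.js (sketch of the corrected boot file)
import { boot } from 'quasar/wrappers';
import { useAppStore } from 'src/stores/app'; // assumed store path

export default boot(({ router, store }) => {
  // Passing the per-request store instance prevents state from leaking
  // between SSR requests, which was the source of the hydration errors.
  const appStore = useAppStore(store);

  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});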

Problem generating the client generator component of API Platform with Nuxt.js

I have an api-platform project.
https://localhost:8888/api does show the API documentation.
When I want to generate the client generator component with the command:
npx @api-platform/client-generator https://127.0.0.1:8000/api . --generator nuxt
I get this response:
{
  api: Api { entrypoint: 'https://127.0.0.1:8000/api', resources: [] },
  error: {
    response: Response {
      size: 0,
      timeout: 0,
      [Symbol(Body internals)]: [Object],
      [Symbol(Response internals)]: [Object]
    }
  },
  response: Response {
    size: 0,
    timeout: 0,
    [Symbol(Body internals)]: { body: [PassThrough], disturbed: false, error: null },
    [Symbol(Response internals)]: {
      url: 'https://127.0.0.1:8000/api',
      status: 200,
      statusText: 'OK',
      headers: [Headers],
      counter: 0
    }
  },
  status: 200
}
No components have been generated, and I'm not sure where to go from there.
I did the setup successfully on my side (until I hit a CORS issue, probably because I was using https://demo.api-platform.com).
Here is my GitHub repo with the working client-side boilerplated Nuxt app I've got.
Some things are questionable, like the use of Moment.js in 2022, the fact that this is a Nuxt app used as an SPA only (ssr: false), some out-of-date configuration (for the nuxt-i18n module, for example), the components: false setting, the remaining ESLint warnings/errors in various files, and the use of an additional vue-i18n package on top of nuxt-i18n, but otherwise the setup looks okay.
In the end it mostly strips away the nice parts of Nuxt to end up with a basic Vue.js SPA, but fine, let's say.
There is this line in entrypoint.js:
export const ENTRYPOINT = 'https://demo.api-platform.com'
Maybe setting it to your own local address will work.
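For instance, with the local API from the question it would become something like the line below (whether you keep an /api suffix depends on how the generated client builds its URLs, so treat this as an assumption to adjust):

// entrypoint.js: point the client at your local API instead of the public demo
export const ENTRYPOINT = 'https://127.0.0.1:8000'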
If it doesn't, maybe try hosting it on Heroku or the like; a real hosted server may be needed. Otherwise, it could also be a migration/DB/any-other-backend issue.
At least it's not an issue on the Nuxt.js side itself.

Next-offline and WorkboxOpts: Error: "runtimeCaching" is not a supported parameter. How to enable "runtimeCaching" in next.config.js

I'm just getting started with next-offline and found the section regarding workbox integration and its recipes.
According to the docs:
If you're new to workbox, I'd recommend reading this quick guide -- anything inside of workboxOpts will be passed to workbox-webpack-plugin.
Define a workboxOpts object in your next.config.js and it will get passed to workbox-webpack-plugin. Workbox is what next-offline uses under the hood to generate the service worker; you can learn more about it here.
After digging around, I found this great section.
Essentially it suggests two different options:
GenerateSW or InjectManifest
I would like to use InjectManifest; however, when I try to implement that in my next.config.js file, I get this error:
"runtimeCaching" is not a supported parameter.
This is my next.config.js:
const withCSS = require('@zeit/next-css');
const withSass = require('@zeit/next-sass');
const withImages = require('next-images');
const optimizedImages = require('next-optimized-images');
const withOffline = require('next-offline');

module.exports = withOffline(
  withImages(
    optimizedImages(
      withCSS(
        withSass({
          // useFileSystemPublicRoutes: false,
          // generateSw: false, // this allows all your workboxOpts to be passed in injectManifest
          generateInDevMode: true,
          workboxOpts: {
            swDest: './service-worker.js', // this is the important part
            exclude: [/.+error\.js$/, /\.map$/, /\.(?:png|jpg|jpeg|svg)$/],
            runtimeCaching: [
              {
                urlPattern: /\.(?:png|jpg|jpeg|svg)$/,
                handler: 'CacheFirst',
                options: {
                  cacheName: 'hillfinder-images'
                }
              },
              {
                urlPattern: /^https?.*/,
                handler: 'NetworkFirst',
                options: {
                  cacheName: 'hillfinder-https-calls',
                  networkTimeoutSeconds: 15,
                  expiration: {
                    maxEntries: 150,
                    maxAgeSeconds: 30 * 24 * 60 * 60 // 1 month
                  },
                  cacheableResponse: {
                    statuses: [0, 200]
                  }
                }
              }
            ]
          },
          dontAutoRegisterSw: false,
          env: {
            MAPBOX_ACCESS_TOKEN: process.env.MAPBOX_ACCESS_TOKEN,
            useFileSystemPublicRoutes: false
          },
          webpack(config, options) {
            config.module.rules.push({
              test: /\.(png|jpg|gif|svg|eot|ttf|woff|woff2)$/,
              use: {
                loader: 'url-loader',
                options: {
                  limit: 100000,
                  target: 'serverless'
                }
              }
            });
            return config;
          }
        })
      )
    )
  )
);
Also, when I check the Application pane in DevTools, I see this:
You'll notice what appears to me to be a duplication of fields, i.e. https-calls and hillfinder-https-calls, and images and hillfinder-images.
I thought the cacheName field in the options: {} of each route was supposed to let me set a custom name?
Just wondering if anyone has had experience setting this up?
Thank you in advance!
(These comments apply to the basic Workbox build tools, not specifically to the next-offline wrapper, but I think they're still accurate.)
If you're using InjectManifest mode, the idea is that you write all of your service worker logic, using the underlying pieces of Workbox that you need, following a model that's similar to what's described in the Getting Started guide. You should include a call to precacheAndRoute(self.__WB_MANIFEST) somewhere in your service worker, and then the InjectManifest build tool is responsible for swapping out self.__WB_MANIFEST with an array containing the list of URLs to precache, along with revision information for each URL.
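As a concrete illustration, a custom service worker for InjectManifest mode could reproduce the runtime caching routes from the question with workbox-routing instead of the runtimeCaching option. The cache names below come from the question; the rest is a sketch, not a drop-in file:

// service-worker.js (the source file InjectManifest compiles; a sketch)
import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { CacheFirst, NetworkFirst } from 'workbox-strategies';
import { ExpirationPlugin } from 'workbox-expiration';
import { CacheableResponsePlugin } from 'workbox-cacheable-response';

// InjectManifest swaps self.__WB_MANIFEST for the list of URLs to precache.
precacheAndRoute(self.__WB_MANIFEST);

// Equivalent of the CacheFirst image route from the question.
registerRoute(
  /\.(?:png|jpg|jpeg|svg)$/,
  new CacheFirst({ cacheName: 'hillfinder-images' })
);

// Equivalent of the NetworkFirst route for HTTP(S) calls.
registerRoute(
  /^https?.*/,
  new NetworkFirst({
    cacheName: 'hillfinder-https-calls',
    networkTimeoutSeconds: 15,
    plugins: [
      new ExpirationPlugin({ maxEntries: 150, maxAgeSeconds: 30 * 24 * 60 * 60 }),
      new CacheableResponsePlugin({ statuses: [0, 200] }),
    ],
  })
);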
The runtimeCaching parameter is not compatible with InjectManifest. It's a parameter that can be used in GenerateSW mode, in which the Workbox build tool creates an entire service worker for you (including runtime caching routes). The GenerateSW mode takes in a declarative configuration and spits out the code for a service worker based on that configuration. If that sounds good, i.e. if you'd just like to configure some build options and get a complete service worker as a result, then GenerateSW is the right choice.
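For comparison, using the Workbox build tool directly in GenerateSW mode keeps the declarative style. A sketch with workbox-webpack-plugin (independent of next-offline; the pattern and cache name are taken from the question):

// webpack.config.js (sketch): GenerateSW writes the whole service worker,
// so runtimeCaching is a supported option here.
const { GenerateSW } = require('workbox-webpack-plugin');

module.exports = {
  // ...the rest of your webpack configuration...
  plugins: [
    new GenerateSW({
      runtimeCaching: [
        {
          urlPattern: /\.(?:png|jpg|jpeg|svg)$/,
          handler: 'CacheFirst',
          options: { cacheName: 'hillfinder-images' },
        },
      ],
    }),
  ],
};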

this.user().context is undefined - Jovo Framework - Alexa

I'm currently using Jovo for cross-platform development of Alexa skills and Google Assistant actions.
I've hit a roadblock where I'm trying to get the previous intent by doing either:
this.user().context.prev[0].request.intent or
this.user().getPrevIntent(0).
But neither has worked: I get that context is undefined and that getPrevIntent doesn't exist. According to the docs, I need to set up a table with DynamoDB (I did, and verified that it's working, since Jovo is able to store the user object) and pass the default configuration to App. But I still can't seem to get it to work. Any ideas?
const config = {
  logging: false,
  // Log incoming JSON requests.
  // requestLogging: true,
  /**
   * You don't want AMAZON.YesIntent on Dialogflow, right?
   * This will map it for you!
   */
  intentMap: {
    'AMAZON.YesIntent': 'YesIntent',
    'AMAZON.NoIntent': 'NoIntent',
    'AMAZON.HelpIntent': 'HelpIntent',
    'AMAZON.RepeatIntent': 'RepeatIntent',
    'AMAZON.NextIntent': 'NextIntent',
    'AMAZON.StartOverIntent': 'StartOverIntent',
    'AMAZON.ResumeIntent': 'ContinueIntent',
    'AMAZON.CancelIntent': 'CancelIntent',
  },
  // Configures DynamoDB to persist data
  db: {
    awsConfig,
    type: 'dynamodb',
    tableName: 'user-data',
  },
  userContext: {
    prev: {
      size: 1,
      request: {
        intent: true,
        state: true,
        inputs: true,
        timestamp: true,
      },
      response: {
        speech: true,
        reprompt: true,
        state: true,
      },
    },
  },
};

const app = new App(config);
const app = new App(config);
Thanks 😊
To make use of the User Context Object of the Jovo Framework, you need to have at least v1.2.0 of the jovo-framework.
You can update the package to the latest version like this: npm install jovo-framework --save
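Once the framework is updated and the userContext block from the question is in place, the accessors from the question should work inside a handler. Roughly (a sketch based on the question's own snippets, not verified against the current Jovo API):

// inside a handler in app.js (Jovo v1 style)
SomeIntent() {
  // Both accessors come from the question; they rely on the user context
  // feature available in jovo-framework >= 1.2.0.
  const prevIntent = this.user().context.prev[0].request.intent;
  // or: const prevIntent = this.user().getPrevIntent(0);
  this.tell('Your previous intent was ' + prevIntent);
},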
(This used to be a comment. Just adding this as an answer so other people see it as well)

grunt-contrib-cssmin takes 10 minutes to complete

I currently have an issue where my grunt-contrib-cssmin task takes nearly 10 minutes to complete. This problem occurred after I updated Grunt and all plugins to their current versions; I had not updated them for nearly a year before that. Before the update, the cssmin task took around 5 seconds to complete.
What could be the cause of this increase in run time after updating? My task definition is:
cssmin: {
  target: {
    options: {
      banner: '/*! app.min.css <%= grunt.template.today("dd-mm-yyyy") %> */\n',
      filename: 'app.min.' + grunt.template.today("ddmmyyyyHHMMss") + '.css'
    },
    files: {
      'app/css/<%= cssmin.target.options.filename %>': [
        'app/lib/bower_components/angular-motion/dist/angular-motion.css',
        'app/lib/bower_components/ng-sortable/dist/ng-sortable.min.css',
        'app/lib/bower_components/pines-notify/pnotify.core.css',
        'app/lib/bower_components/pines-notify/pnotify.picon.css',
        'app/lib/bower_components/pines-notify/pnotify.buttons.css',
        'app/lib/bower_components/pines-notify/pnotify.history.css',
        'app/lib/bower_components/angular-bootstrap-colorpicker/css/colorpicker.css',
        'app/lib/bower_components/ng-sortable/dist/ng-sortable.min.css',
        'app/lib/patches/ui.dynatree.css',
        'app/lib/bower_components/selectize/dist/css/selectize.bootstrap3.css',
        'app/lib/bower_components/angular-ui-grid/ui-grid.css',
        'app/lib/bower_components/angular-resizable/angular-resizable.min.css',
        'app/css/screen.css',
        'app/css/ie.css'
      ]
    }
  }
},
Found the cause of the issue. grunt-contrib-cssmin internally uses clean-css. clean-css has an option called "advanced", which can also be set on grunt-contrib-cssmin; it simply delegates the option to clean-css.
If the advanced option is set to true, execution times are around 10 minutes. If it is set to false, my execution times for this task are around 6 seconds.
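Applied to the task definition from the question, disabling it looks like this (only the advanced flag is new; the rest is unchanged from the question):

cssmin: {
  target: {
    options: {
      advanced: false, // skip clean-css's advanced optimizations (the slow part)
      banner: '/*! app.min.css <%= grunt.template.today("dd-mm-yyyy") %> */\n',
      filename: 'app.min.' + grunt.template.today("ddmmyyyyHHMMss") + '.css'
    },
    files: {
      // same file list as in the question
    }
  }
},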
