Why do hydration errors happen only in production, a few hours after deploying a Quasar application - server-side-rendering

I'm running into a weird situation that is becoming extremely hard to debug.
I made a post at https://forum.cleavr.io/t/cloudflare-caching-in-a-quasar-app/844 thinking that the problem was caching.
We are seeing hydration errors in our web app codotto.com ONLY after a few hours (or minutes, depending on website traffic). As soon as I redeploy the app, everything works well again. No more hydration errors.
We had the idea that it was caching but we have disabled caching completely in our Cloudflare dashboard:
Then we can verify that the cache is not being used:
Note the CF-Cache-Status header set to DYNAMIC (see here: a DYNAMIC value in the CF-Cache-Status header means the cache is not being used).
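A quick way to double-check this from a script (a sketch, assuming Node 18+ where fetch is built in):

fetch('https://codotto.com').then((res) => {
  // "DYNAMIC" means the response was not served from Cloudflare's cache
  console.log(res.headers.get('cf-cache-status'));
});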
Locally the app works great, and we are not able to reproduce the issue locally or in staging. This only happens in production. Here are our pm2 configurations:
production
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxx/codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode: "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 7145,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
staging
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "staging.codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxxx/staging.codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode: "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 9892,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
We are running out of ideas. This only happens in production, which makes it extremely hard to debug, and we are narrowing the problem down to the server configuration.
We understand that this question might be flagged as off-topic since it's not very specific, but what I'm looking for is ideas on things that might help debug this issue. The web app is built with the Quasar framework in SSR mode, and we use https://cleavr.io/ to deploy the application.
We have tried following this guide on Quasar's website for debugging hydration errors, but it hasn't gotten us anywhere.
In case you would like to reproduce the bug, you will need to sign up for an account on codotto.com, then visit https://codotto.com so that you are redirected to the dashboard instead of the landing page.
Can anyone here help or explain why we have these hydration errors?

The problem was not related to caching or any of the other things we suspected.
In one of our boot files we had the following:
export default boot(({ router }) => {
  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    const appStore = useAppStore();
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});
In this file we didn't pass the store to useAppStore, causing cross-request state pollution and, consequently, the hydration errors. The fix was to pass store to useAppStore:
export default boot(({ router, store }) => {
  const appStore = useAppStore(store);
  ...
})
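For completeness, the corrected boot file might look like this in full (a sketch assembled from the two snippets above; the import path for useAppStore is an assumption):

import { boot } from 'quasar/wrappers';
import { useAppStore } from 'stores/app'; // assumed path to the Pinia store

export default boot(({ router, store }) => {
  // Passing the per-request Pinia instance avoids sharing state across SSR requests
  const appStore = useAppStore(store);
  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});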

Related

How to override the AWS Amplify API domain endpoint

I have added a custom domain to the API Gateway due to CORS/Cookies/Infosec and other fun reasons.
I then added the following code to hack the correct domain into my Amplify configuration:
import { Amplify, API } from "aws-amplify"

const myfunc = async () => {
  // Grab the endpoint config Amplify already knows about...
  const amplifyEndpoint = API._restApi._options.endpoints[0]
  // ...and re-configure it with the custom domain
  Amplify.configure({
    aws_cloud_logic_custom: [
      {
        ...amplifyEndpoint,
        endpoint: process.env.REACT_APP_API_URL || amplifyEndpoint.endpoint,
      },
    ]
  })
  const response = await API.post("MyApiNameHere", "/some-endpoint", { data: "here" })
}
This works, but a) is this really the correct way to do it? and b) I'm seeing a weird issue whereby the first API.post request of the day from a user is missing the authorization and x-amz-security-token headers that I expect Amplify to be magically providing. If the user refreshes the page, the headers are sent and everything works as expected.
[edit] turns out my missing headers issue is unrelated to this override, so still need to get to the bottom of that!
The more correct place to do this looks to be index.js:
import { Amplify, API } from "aws-amplify"
import awsExports from "./aws-exports"

Amplify.configure(awsExports)

const amplifyEndpoint = API._restApi._options.endpoints[0]
Amplify.configure({
  aws_cloud_logic_custom: [
    {
      ...amplifyEndpoint,
      endpoint: process.env.REACT_APP_API_URL || amplifyEndpoint.endpoint,
    },
  ]
})
That way it's done once - rather than needing to be done in each API call.
Naturally you would need some more complicated logic if you were dealing with multiple endpoints.
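For multiple endpoints, something along these lines might work (a sketch relying on the same private _restApi internals as above; the overrides map and any extra env vars are hypothetical):

// Map each known endpoint name to an optional override; names must match aws-exports
const overrides = {
  MyApiNameHere: process.env.REACT_APP_API_URL,
  // OtherApiName: process.env.REACT_APP_OTHER_API_URL,
}

Amplify.configure({
  aws_cloud_logic_custom: API._restApi._options.endpoints.map((ep) => ({
    ...ep,
    endpoint: overrides[ep.name] || ep.endpoint,
  })),
})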

pino-datadog-transport with Next.js on Vercel

I'm trying to migrate a Next.js project running on Vercel from
"pino-datadog": "2.0.2",
"pino-multi-stream": "6.0.0",
to
"pino": "8.4.2",
"pino-datadog-transport": "1.2.2",
and I copied the setup from pino-datadog-transport's README.md:
import { LoggerOptions, pino } from 'pino'

const pinoConf: LoggerOptions = {
  level: 'trace',
}

const logger = pino(
  pinoConf,
  pino.transport({
    target: 'pino-datadog-transport',
    options: {
      ddClientConf: {
        authMethods: {
          apiKeyAuth: process.env.DATADOG_API_KEY,
        },
      },
      ddServerConf: {
        site: 'datadoghq.eu',
      },
      service: process.env.VERCEL_URL,
      ddsource: 'nodejs',
    },
  }),
)
and this seems to work fine locally, but when I publish it on Vercel and run it there I get the following error:
ERROR Error: unable to determine transport target for "pino-datadog-transport"
at fixTarget (/var/task/node_modules/pino/lib/transport.js:136:13)
at Function.transport (/var/task/node_modules/pino/lib/transport.js:110:22)
Am I missing some additional config to get this working? Anyone else running this setup or something similar to get explicit logs working on Vercel with Next.js?
I have enabled the Datadog integration in Vercel as well, but that only forwards Next.js logs, not explicit console.logs or standard Pino logs from what I can tell.
The solution to this problem is to import the transport package, even though nothing from the import is actually used in the code.
It seems Next.js strips away all code that isn't statically imported when the app is deployed.
So, adding
import 'pino-datadog-transport'
at the top of the file solves the problem.
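With that, the top of the logger file from above would look like this (the side-effect import is the only change):

import 'pino-datadog-transport' // side-effect import: keeps the package in the deployed bundle
import { LoggerOptions, pino } from 'pino'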

Problem generating the client generator component of API Platform with Nuxt.js

I have an api-platform project.
https://localhost:8888/api does show the API documentation.
When I try to generate the client generator component with the command:
npx @api-platform/client-generator https://127.0.0.1:8000/api . --generator nuxt
I get this response:
{
  api: Api { entrypoint: 'https://127.0.0.1:8000/api', resources: [] },
  error: {
    response: Response {
      size: 0,
      timeout: 0,
      [Symbol(Body internals)]: [Object],
      [Symbol(Response internals)]: [Object]
    }
  },
  response: Response {
    size: 0,
    timeout: 0,
    [Symbol(Body internals)]: { body: [PassThrough], disturbed: false, error: null },
    [Symbol(Response internals)]: {
      url: 'https://127.0.0.1:8000/api',
      status: 200,
      statusText: 'OK',
      headers: [Headers],
      counter: 0
    }
  },
  status: 200
}
No components were generated and I'm not sure where to go from there.
I did the setup successfully on my side (until I got a CORS issue, probably because I was using https://demo.api-platform.com).
Here is my GitHub repo with the working client-side boilerplate Nuxt app that I've got.
Some things are questionable: the usage of Moment.js in 2022, the fact that this is a Nuxt app used as an SPA only (ssr: false), some out-of-date configuration such as the nuxt-i18n module, the components: false setting, the ESLint warnings/errors still present in various files, and the use of an additional vue-i18n package on top of nuxt-i18n. But the setup looks kinda okay otherwise.
At the end of the day it's removing most of the nice stuff from Nuxt to end up with a basic Vue.js SPA, but fair enough.
There is this line in entrypoint.js
export const ENTRYPOINT = 'https://demo.api-platform.com'
Maybe pointing this one to your own local address will work.
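Using the local URL from the question, that would be something like (a sketch):

// entrypoint.js: point the generated client at your own API instead of the demo
export const ENTRYPOINT = 'https://localhost:8888/api'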
If it doesn't, maybe try hosting it on Heroku or the like; a real hosted server may be needed. Otherwise, it could also be a migration/DB/any-other-backend issue.
It's not an issue on the Nuxt.js side itself at least.

How to get Next.js app working with Cloudflare Workers - Workers Sites

I am struggling to get an SPA working using Next.js with Cloudflare's Workers Sites. I found an index.js and a wrangler file in their documentation (https://flareact.com/docs/getting-started#quickstart) and I am using them, but I am getting the browser error: could not find index.html in your content namespace.
import { handleEvent } from "flareact";

/**
 * The DEBUG flag will do two things that help during development:
 * 1. we will skip caching on the edge, which makes it easier to debug.
 * 2. we will return an error message on exception in your Response rather
 *    than the default 404.html page.
 */
const DEBUG = false;

addEventListener("fetch", (event) => {
  try {
    event.respondWith(
      handleEvent(event, require.context("./pages/", true, /\.(js|jsx|ts|tsx)$/), DEBUG)
    );
  } catch (e) {
    if (DEBUG) {
      return event.respondWith(
        new Response(e.message || e.toString(), {
          status: 500,
        })
      );
    }
    event.respondWith(new Response("Internal Error", { status: 500 }));
  }
});
I am using a standard Next.js app.
This is perhaps not the answer you are looking for, but Cloudflare Pages is a lot better suited for deploying JAMstack-style sites on Cloudflare. They have specific instructions for Next.js, and I would suggest using that product if you are able to make it work. It has the added benefit of being a lot easier to deploy as well: simply push to a branch on GitHub and Cloudflare will do the rest.

RequireJS Google Maps: google is not defined

I have a simple Node.js project that should asynchronously load the Google Maps JavaScript API. I followed this answer https://stackoverflow.com/a/15796543
and my app.js looks like this:
var express = require("express"),
    app = express(),
    bodyParser = require("body-parser"),
    methodOverride = require("method-override"),
    https = require("https"),
    requirejs = require("requirejs");

requirejs.config({
  waitSeconds: 500,
  isBuild: true,
  paths: {
    'async': 'node_modules/requirejs-plugins/src/async',
  }
});

app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use(methodOverride());

var router = express.Router();
router.get('/', function(req, res) {
  res.send("Hello World!");
});

requirejs(["async!http://maps.google.com/maps/api/js?key=mykey&sensor=false"], function() {
  console.log(google);
});

app.listen(3000, function() {
  console.log("asd");
});
package.json:
{
"name": "rest-google-maps-api",
"version": "2.0.0",
"dependencies": {
"express": "^4.7.1",
"method-override": "^2.1.2",
"body-parser": "^1.5.1",
"requirejs": "2.3.3",
"requirejs-plugins": "1.0.2"
}
}
I always get the same error:
ReferenceError: google is not defined
The main issue here is that you are trying to run, in Node, code that is really meant to be used in a browser.
The async plugin
This plugin needs to be able to add script elements to the document and needs window. I see you set isBuild: true in your RequireJS configuration. It does silence the error that async immediately raises if you do not use this flag, but this is not a solution because:
1. isBuild is really meant to be set internally by RequireJS's optimizer (or any optimizer compatible with RequireJS), not manually like you are doing.
2. isBuild is meant to indicate to plugins that they are running as part of an optimization run. However, your code is using the plugin at run time rather than as part of an optimization, so setting isBuild: true is a lie and will result in undesirable behavior. The async plugin is written in such a way that it effectively does nothing if isBuild is true; other plugins may crash.
Google's Map API
It also expects a browser environment. The very first line I see when I download its code is this:
window.google = window.google || {};
Later in the code there are references to window.document and window.postMessage.
I don't know if it is possible to run the code you've been trying to load from Google in Node. I suspect you'd most likely need something like jsdom to provide a browser-like environment to the API.
Assuming you did everything else correctly (which I am not testing here), the reason you are getting the error is that you call console.log(google) and there is no google variable in scope. You need to pass google in as a parameter in your callback function. This will either get rid of the error, or change the error if you have set up RequireJS incorrectly.
requirejs(["async!http://maps.google.com/maps/api/js?key=mykey&sensor=false"],
function( **google** ) {
console.log(google);
});
See the RequireJS docs: http://requirejs.org/docs/node.html#1
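For reference, the basic Node setup from those docs looks like this (a sketch):

var requirejs = require('requirejs');

requirejs.config({
  // Pass the top-level main.js/index.js require function to requirejs
  // so that node modules are loaded relative to the top-level JS file.
  nodeRequire: require
});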
