Problem generating the client generator component of API Platform with Nuxt.js - Symfony

I have an api-platform project.
https://localhost:8888/api does show the API documentation.
When I try to generate the client components with the client generator command:
npx @api-platform/client-generator https://127.0.0.1:8000/api . --generator nuxt
I get this response:
{
  api: Api { entrypoint: 'https://127.0.0.1:8000/api', resources: [] },
  error: {
    response: Response {
      size: 0,
      timeout: 0,
      [Symbol(Body internals)]: [Object],
      [Symbol(Response internals)]: [Object]
    }
  },
  response: Response {
    size: 0,
    timeout: 0,
    [Symbol(Body internals)]: { body: [PassThrough], disturbed: false, error: null },
    [Symbol(Response internals)]: {
      url: 'https://127.0.0.1:8000/api',
      status: 200,
      statusText: 'OK',
      headers: [Headers],
      counter: 0
    }
  },
  status: 200
}
No components were generated, and I'm not sure where to go from there.

I did the setup successfully on my side (until I hit a CORS issue, probably because I was using https://demo.api-platform.com).
Here is my GitHub repo with a working boilerplate'd client-side Nuxt app.
Some things are questionable: the usage of Moment.js in 2022, the fact that this Nuxt app is used as an SPA only (ssr: false), some out-of-date configuration (e.g. for the nuxt-i18n module), components: false, the remaining ESLint warnings/errors in the various files, and the use of an additional vue-i18n package on top of nuxt-i18n, but the setup looks kinda okay otherwise.
I mean, in the end it's kind of removing all the nice stuff from Nuxt to end up with a basic Vue.js SPA, but fine, let's say.
There is this line in entrypoint.js
export const ENTRYPOINT = 'https://demo.api-platform.com'
Setting this one to your own local address may work.
If it doesn't, try hosting it on Heroku or the like; a real hosted server may be needed. Otherwise, it could also be a migration, database, or other backend issue.
At least it's not an issue on the Nuxt.js side itself.
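Following the entrypoint.js suggestion above, the file can be pointed at the local API instead of the public demo. A minimal sketch, assuming the local origin mentioned in the question (adjust host, port, and any /api prefix to match your setup):

```javascript
// entrypoint.js — point the generated client at the local API Platform
// instance instead of the public demo (origin is an assumption based on
// the question's local setup)
export const ENTRYPOINT = 'https://localhost:8888'
```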

Why do hydration errors happen only in production a few hours after deploying a Quasar application

I'm running into a weird situation here that is becoming extremely hard to debug.
I have made a post here https://forum.cleavr.io/t/cloudflare-caching-in-a-quasar-app/844 thinking that the problem was about caching.
We are having hydration errors in our webapp codotto.com ONLY after a few hours (or minutes, depending on website traffic). As soon as I redeploy the app, everything works well: no more hydration errors.
We suspected caching, but we have disabled caching completely in our Cloudflare dashboard, and we can verify that the cache is not being used: the CF-Cache-Status response header is set to DYNAMIC, which per Cloudflare's documentation means the cache is not being used for the response.
The app works great locally, and we are not able to reproduce the issue locally or in staging; it only happens in production. Here are our pm2 configurations:
production
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxx/codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 7145,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
staging
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "staging.codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxxx/staging.codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 9892,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
We are running out of ideas; this only happens in production, which makes it extremely hard to debug, and we have scoped the problem down to the server configuration.
We understand that this question might be tagged as off-topic since it's not too specific, but what I'm looking for is ideas on things that might help debug this issue. The webapp is built with the Quasar framework in SSR mode, and we use https://cleavr.io/ to deploy our application.
We have tried following this guide on Quasar's website to debug hydration errors, but it hasn't gotten us anywhere.
In case you would like to reproduce the bug, you will need to sign up for an account on codotto.com, then visit https://codotto.com so that you are redirected to the dashboard instead of the landing page.
Can anyone here help or explain why we have these hydration errors?
The problem was not related to caching or any of the other causes we suspected.
In one of our bootfiles we had the following:
import { boot } from 'quasar/wrappers';
import { useAppStore } from 'stores/app'; // import path for the store is app-specific

export default boot(({ router }) => {
  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    const appStore = useAppStore();
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});
In this file we didn't pass the store to useAppStore, causing cross-request state pollution and, consequently, the hydration errors. The fix was to pass store to useAppStore:
export default boot(({ router, store }) => {
  const appStore = useAppStore(store);
  ...
})
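The fix above works because of how SSR shares module scope between requests. Here is a minimal framework-agnostic sketch (not Quasar or Pinia internals; the function names are made up) of the failure mode: a module-level singleton store survives across requests handled by the same Node process, while a per-request store does not:

```javascript
// Module scope is shared by every request served by the same Node process.
const globalStore = { isLoggedIn: false }; // module-level singleton

// Buggy: every request reads and writes the same shared object.
function handleRequestBuggy(userIsLoggedIn) {
  if (userIsLoggedIn) globalStore.isLoggedIn = true;
  return globalStore.isLoggedIn;
}

// Fixed: each request gets its own store instance, like passing `store`
// into `useAppStore(store)` above.
function handleRequestFixed(userIsLoggedIn) {
  const store = { isLoggedIn: userIsLoggedIn }; // fresh per request
  return store.isLoggedIn;
}

console.log(handleRequestBuggy(true));  // true — logged-in user
console.log(handleRequestBuggy(false)); // true! state leaked from the previous request
console.log(handleRequestFixed(false)); // false — no leakage
```

With the singleton, a guest whose request happens to be served after a logged-in user's request is rendered as logged in on the server, while the client renders them as a guest, which is exactly a hydration mismatch.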

pino-datadog-transport with Next.js on Vercel

I'm trying to migrate a Next.js project running on Vercel from
"pino-datadog": "2.0.2",
"pino-multi-stream": "6.0.0",
to
"pino": "8.4.2",
"pino-datadog-transport": "1.2.2",
and I copied the setup from pino-datadog-transport's README.md:
import { LoggerOptions, pino } from 'pino'

const pinoConf: LoggerOptions = {
  level: 'trace',
}

const logger = pino(
  pinoConf,
  pino.transport({
    target: 'pino-datadog-transport',
    options: {
      ddClientConf: {
        authMethods: {
          apiKeyAuth: process.env.DATADOG_API_KEY,
        },
      },
      ddServerConf: {
        site: 'datadoghq.eu',
      },
      service: process.env.VERCEL_URL,
      ddsource: 'nodejs',
    },
  }),
)
and this seems to be working fine locally, but when I publish it on Vercel and run it there I get the following error:
ERROR Error: unable to determine transport target for "pino-datadog-transport"
    at fixTarget (/var/task/node_modules/pino/lib/transport.js:136:13)
    at Function.transport (/var/task/node_modules/pino/lib/transport.js:110:22)
Am I missing some additional config to get this working? Anyone else running this setup or something similar to get explicit logs working on Vercel with Next.js?
I have enabled the Datadog integration in Vercel as well, but that only forwards Next.js logs, not explicit console.logs or standard Pino logs from what I can tell.
The solution to this problem is to explicitly import the transport package, even though nothing from the import is actually used in the code.
It seems Next.js strips away modules that aren't statically imported when the code is deployed.
So, adding
import 'pino-datadog-transport'
at the top of the file solves the problem.
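Putting the pieces together, the working configuration could look like the following sketch, based on the snippets above (the env var names are the ones used in the question):

```javascript
// The side-effect import keeps the bundler from stripping the transport
// package, so pino can resolve it at runtime on Vercel.
import 'pino-datadog-transport'
import { pino } from 'pino'

const logger = pino(
  { level: 'trace' },
  pino.transport({
    target: 'pino-datadog-transport',
    options: {
      ddClientConf: {
        authMethods: { apiKeyAuth: process.env.DATADOG_API_KEY },
      },
      ddServerConf: { site: 'datadoghq.eu' },
      service: process.env.VERCEL_URL,
      ddsource: 'nodejs',
    },
  }),
)
```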

Request header field sentry-trace is not allowed by Access-Control-Allow-Headers in preflight response

I'm developing a frontend using Next.js and Keycloak for authentication. After adding Sentry, I'm facing this issue where Keycloak's token endpoint returns an error, so I can't log in.
I've tried many things:
Changing the web-origin config of Keycloak, which (obviously) doesn't solve the problem
Playing with the Sentry client config, without success, because even with the denyUrls property set, the Sentry SDK still sends the sentry-trace header with the request.
Now I'm out of ideas, so I'm coming here for more help.
So after some investigation, I came across the tracingOrigins property, which can be set via integrations like this:
integrations: [
  new (Sentry.Integrations as any).BrowserTracing({
    tracingOrigins: [
      process.env.NEXT_PUBLIC_URL,
      process.env.NEXT_PUBLIC_BACKEND_URL,
      process.env.NEXT_PUBLIC_MATOMO_URL,
    ],
  }),
],
This config is done inside the sentry.client.config.ts file. The downside is that URLs not included there are simply not traced.
Unfortunately, Keycloak has a hardcoded list of allowed headers, so you can't configure Keycloak to accept the sentry-trace header.
You have some non-ideal workarounds:
don't use Sentry
compile your own patched Keycloak version that allows the header
add a reverse proxy in front of Keycloak that adds sentry-trace to the allowed headers
...
I've solved this issue on a Next.js application by adding the following 'Access-Control-Allow-Headers' header to the static sourcemap response in next.config.js:
const CONFIG = {
  headers: () => [
    {
      source: "/_next/:path*",
      headers: [
        { key: "Access-Control-Allow-Origin", value: SHOP_ORIGIN },
        { key: "Access-Control-Allow-Headers", value: "*" },
      ],
    },
  ],
}
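To illustrate the tracingOrigins behaviour described above, here is a simplified sketch (not Sentry's actual implementation; the URLs are made up) of how an allow-list gates whether the sentry-trace header is attached to an outgoing request:

```javascript
// Simplified illustration of an allow-list check: only requests to URLs
// matching an entry in the list get the sentry-trace header attached.
function shouldAttachTrace(url, tracingOrigins) {
  return tracingOrigins.some((origin) => Boolean(origin) && url.includes(origin));
}

// With only the app's own origins in the list, a request to Keycloak no
// longer carries the header, so Keycloak's CORS preflight passes.
const tracingOrigins = ['https://app.example.com', 'https://api.example.com'];

console.log(shouldAttachTrace('https://app.example.com/page', tracingOrigins));     // true
console.log(shouldAttachTrace('https://keycloak.example.com/token', tracingOrigins)); // false
```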

Firebase service account private key exposed to admin.firestore

I configured the Firebase Admin SDK in my Node.js backend in a variable called admin, and I call admin.firestore(). When I run console.log(admin.firestore()), I see the private key of my service account displayed in the backend terminal. Here is the console log I see:
Firestore {
  _settings: {
    credentials: {
      private_key: 'my actual private key',
      client_email: 'xxxxx'
    },
    projectId: 'pxxxx3',
    firebaseVersion: '8.13.0',
    libName: 'gccl',
    libVersion: '3.8.6 fire/8.13.0'
  },
  _settingsFrozen: false,
  _serializer: Serializer { createReference: [Function], allowUndefined: false },
  _projectId: 'xxxxx',
  registeredListenersCount: 0,
  _lastSuccessfulRequest: 0,
  _backoffSettings: { initialDelayMs: 100, maxDelayMs: 60000, backoffFactor: 1.3 },
  _preferTransactions: false,
  _clientPool: ClientPool {
    concurrentOperationLimit: 100,
    maxIdleClients: 1,
    clientFactory: [Function],
    clientDestructor: [Function],
    activeClients: Map {},
    terminated: false,
    terminateDeferred: Deferred {
      resolve: [Function],
      reject: [Function],
      promise: [Promise]
    }
  }
}
I am a bit concerned that this might be a security risk, even though it only appears within my backend code. Should I be concerned?
If data is only ever available on your backend, then it is "secure" in that only people who have permission to access your backend can see it. The problem is not that the data is in the log, the problem is in who you allow to see that log.
If the data never escapes to a client app, then you don't have to worry about random people on the internet seeing your credentials.
IMHO, if an external entity can log into your system, you have a different kind of problem.
If you think about it, most environment variables have to be placed somewhere at runtime. They should not be hardcoded in your code, but at runtime you need a mechanism that copies the values into your system. After that, it's all about authorization: only users with the right permissions should be allowed into your system.
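That said, if the log output might ever leave the trusted backend (log aggregators, shared dashboards, screenshots), one option is to redact sensitive fields before logging. A small sketch with a hypothetical redact helper (not part of firebase-admin); the key names mirror the log shown in the question:

```javascript
// Hypothetical helper: recursively replaces known-sensitive fields with a
// placeholder before an object is logged.
const SENSITIVE_KEYS = ['private_key', 'client_email', 'apiKey', 'password'];

function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) =>
        SENSITIVE_KEYS.includes(k) ? [k, '[REDACTED]'] : [k, redact(v)]
      )
    );
  }
  return value;
}

// Shape taken from the Firestore settings object in the question.
const settings = {
  credentials: { private_key: '-----BEGIN PRIVATE KEY-----...', client_email: 'xxxxx' },
  projectId: 'pxxxx3',
};
console.log(redact(settings)); // credentials are masked, projectId untouched
```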

this.user().context is undefined - Jovo Framework - Alexa

I'm currently using Jovo to develop cross-platform skills/actions for Alexa and Google Assistant.
I've hit a roadblock in which I'm trying to get the previous intent by doing either:
this.user().context.prev[0].request.intent or
this.user().getPrevIntent(0).
But neither has worked: I get that context is undefined and that getPrevIntent doesn't exist. According to the docs, I need to set up a table with DynamoDB (I did, and verified that it's working, since Jovo is able to store the user object) and pass the default configuration to App. But I still can't seem to get it to work. Any ideas?
const config = {
  logging: false,
  // Log incoming JSON requests.
  // requestLogging: true,
  /**
   * You don't want AMAZON.YesIntent on Dialogflow, right?
   * This will map it for you!
   */
  intentMap: {
    'AMAZON.YesIntent': 'YesIntent',
    'AMAZON.NoIntent': 'NoIntent',
    'AMAZON.HelpIntent': 'HelpIntent',
    'AMAZON.RepeatIntent': 'RepeatIntent',
    'AMAZON.NextIntent': 'NextIntent',
    'AMAZON.StartOverIntent': 'StartOverIntent',
    'AMAZON.ResumeIntent': 'ContinueIntent',
    'AMAZON.CancelIntent': 'CancelIntent',
  },
  // Configures DynamoDB to persist data
  db: {
    awsConfig,
    type: 'dynamodb',
    tableName: 'user-data',
  },
  userContext: {
    prev: {
      size: 1,
      request: {
        intent: true,
        state: true,
        inputs: true,
        timestamp: true,
      },
      response: {
        speech: true,
        reprompt: true,
        state: true,
      },
    },
  },
};

const app = new App(config);
Thanks 😊
To make use of the User Context Object of the Jovo Framework, you need at least v1.2.0 of jovo-framework.
You can update the package to the latest version like this: npm install jovo-framework --save
(This used to be a comment. Just adding this as an answer so other people see it as well)
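To illustrate what the userContext.prev configuration above stores, here is a simplified sketch (not Jovo's actual implementation; the class name is made up) of a fixed-size history where prev[0] is the most recent request, which is what this.user().getPrevIntent(0) reads:

```javascript
// Fixed-size history of previous requests, newest first.
class PrevContext {
  constructor(size) {
    this.size = size;
    this.prev = [];
  }
  push(request) {
    this.prev.unshift({ request });          // newest entry goes to index 0
    if (this.prev.length > this.size) this.prev.pop(); // drop the oldest
  }
  getPrevIntent(index) {
    return this.prev[index] && this.prev[index].request.intent;
  }
}

const ctx = new PrevContext(1); // size: 1, as in the config above
ctx.push({ intent: 'YesIntent' });
ctx.push({ intent: 'HelpIntent' });
console.log(ctx.getPrevIntent(0)); // 'HelpIntent' — with size 1, only the latest is kept
```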
