I'm using the T3 app (Next.js, tRPC, etc.), and I don't know whether these env variable errors just started happening or I simply hadn't noticed them before. I have all of the environment variables set in the .env file and the following configuration in the schema.mjs file:
export const serverSchema = z.object({
  DATABASE_URL: z.string().url(),
  NODE_ENV: z.enum(["development", "test", "production"]),
  NEXTAUTH_SECRET: z.string(),
  NEXTAUTH_URL: z.preprocess(
    // This makes Vercel deployments not fail if you don't set NEXTAUTH_URL,
    // since NextAuth automatically uses the VERCEL_URL if present.
    (str) => process.env.VERCEL_URL ?? str,
    // VERCEL_URL doesn't include `https` so it can't be validated as a URL
    process.env.VERCEL ? z.string() : z.string().url(),
  ),
  GOOGLE_CLIENT_ID: z.string(),
  GOOGLE_CLIENT_SECRET: z.string(),
  STRIPE_SECRET_KEY: z.string(),
});
export const serverEnv = {
  DATABASE_URL: process.env.DATABASE_URL,
  NODE_ENV: process.env.NODE_ENV,
  NEXTAUTH_SECRET: process.env.NEXTAUTH_SECRET,
  GOOGLE_CLIENT_ID: process.env.GOOGLE_CLIENT_ID,
  GOOGLE_CLIENT_SECRET: process.env.GOOGLE_CLIENT_SECRET,
  NEXTAUTH_URL: process.env.NEXTAUTH_URL,
  STRIPE_SECRET_KEY: process.env.STRIPE_SECRET_KEY,
};
However, the values read from process.env are undefined. The only one that has a value is NODE_ENV, even though it isn't configured any differently from the rest of the variables.
I'm pretty lost on why this is happening. I've looked up this problem, but nothing is coming up. Am I doing something incorrectly?
I am using GitHub Actions, where I am storing some secrets that are made available as environment variables. I want to access these variables from my Renovate config.js file.
process.env.VARIABLE_NAME does not seem to work.
There seems to be a PR that introduced this feature, but it is not documented how it should be used: https://github.com/renovatebot/renovate/pull/8321/files#
Here is my renovate-config.js file:
module.exports = {
  platform: 'github',
  logLevel: 'debug',
  labels: ['renovate', 'dependencies', 'automated'],
  onboarding: true,
  onboardingConfig: {
    extends: ['config:base', 'disableDependencyDashboard']
  },
  cacheDir: "/tmp/renovate",
  renovateFork: true,
  gitAuthor: "renovate <renovate@hhpv.de>",
  username: "Renovate",
  onboarding: false,
  printConfig: true,
  requireConfig: false,
  logLevel: "DEBUG",
  baseBranches: ["ecr-renovate"],
  customEnvVariables: {
    // what should i put here
  },
  hostRules: [
    {
      hostType: 'docker',
      matchHost: '123456456.dkr.ecr.eu-central-1.amazonaws.com',
      //username: process.env.AWS_ACCESS_KEY,
      //password: process.env.AWS_SECRET_KEY
    },
  ],
};
It seems Renovate does not resolve environment variables inside its config file; at least I could not find a working example either.
You can, however, provide parts of the Renovate config as environment variables, in which other environment variables are resolved.
In my case I had to provide an access token for a private maven repository, and this is what I did in my gitlab-ci.yml:
variables:
  RENOVATE_HOST_RULES: '[{"matchHost": "https://gitlab.company.com/api/v4/groups/myprojectgroup/-/packages/maven", "token": "$CI_JOB_TOKEN"}]'
If you take a look into Renovate's debug log, you should find an entry like this when the config is picked up:
"msg":"Adding token authentication for https://gitlab.company.com/api/v4/groups/myprojectgroup/-/packages/maven to hostRules","time":"2022-12-02T12:59:54.402Z","v":0}
I'm running into a weird situation here that is becoming extremely hard to debug.
I have made a post here https://forum.cleavr.io/t/cloudflare-caching-in-a-quasar-app/844 thinking that the problem was about caching.
We are having hydration errors in our webapp codotto.com ONLY after a few hours (or minutes depending on website traffic). As soon as I redeploy the app, everything works well. No hydration errors anymore.
We suspected caching, but we have disabled caching completely in our Cloudflare dashboard, and we can verify that the cache is not being used: the CF-Cache-Status response header is set to DYNAMIC, which per Cloudflare's documentation means the response is not served from the cache.
Locally the app works great, and we cannot reproduce the issue locally or in staging; it only happens in production. Here are our PM2 configurations:
production
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxx/codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode: "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 7145,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
staging
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "staging.codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxxx/staging.codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode: "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 9892,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
We are running out of ideas. This only happens in production, which makes it extremely hard to debug, and we are narrowing the problem down to the server configuration.
We understand that this question might be flagged as off-topic since it's not very specific, but what I'm looking for are ideas on things that might help debug this issue. The web app is built with the Quasar framework in SSR mode, and we use https://cleavr.io/ to deploy it.
We have tried following the guide on Quasar's website for debugging hydration errors, but it hasn't gotten us anywhere.
In case you would like to reproduce the bug, you will need to sign up for an account on codotto.com, then visit https://codotto.com so that you are redirected to the dashboard instead of the landing page.
Can anyone here help or explain why we have these hydration errors?
The problem was not related to caching or any of the other things we thought it was.
In one of our boot files we had the following:
export default boot(({ router }) => {
  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    const appStore = useAppStore();
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});
In this file we didn't pass the store to useAppStore, which caused cross-request state pollution and, consequently, the hydration errors. The fix was to pass the store to useAppStore:
export default boot(({ router, store }) => {
  const appStore = useAppStore(store);
  ...
})
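For completeness, the full boot file with that change applied would look roughly like this; it is just the guard logic from the original snippet combined with the store argument from the fix. The import paths are assumptions, not taken from the project.

import { boot } from 'quasar/wrappers';
import { useAppStore } from 'src/stores/app'; // store path is an assumption

export default boot(({ router, store }) => {
  // Pass the per-request store instance so SSR requests don't share state
  const appStore = useAppStore(store);

  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});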
I have an app that authenticates against Azure AD using next-auth 4.
It works fine and I can authenticate.
When I try to configure a custom base path http://localhost:3000/myapp for the app and call signIn(), I get a 404 error telling me that http://localhost:3000/api/auth/error is not found.
I have configured all the paths as described below.
What confuses me:
If I use the standard path, I notice that [...nextauth].js gets executed.
If I use a custom path, [...nextauth].js is not executed at all.
A similar scenario in next-auth 3 worked just fine.
Compared with next-auth 3, it looks like next-auth 4 does not honor NEXTAUTH_URL.
What can I do?
.env.local
AZURE_CLIENT_ID= ....
AZURE_CLIENT_SECRET=...
AZURE_TENANT_ID=...
JWT_SECRET=...
NEXTAUTH_URL=http://localhost:3000/myapp/api/auth
NEXTAUTH_URL_INTERNAL=http://localhost:3000/mypp/api/auth
NEXT_AUTH_DEBUG=true
APP_BASE_PATH=/myapp
next.config.js:
const basePath = process.env.APP_BASE_PATH;

module.exports = {
  reactStrictMode: true,
  basePath: `${basePath}`,
};
[...nextauth].js:
export default NextAuth({
  providers: [
    AzureADProvider({
      clientId: process.env.AZURE_CLIENT_ID,
      clientSecret: process.env.AZURE_CLIENT_SECRET,
      scope: "offline_access User.Read",
      tenantId: process.env.AZURE_TENANT_ID,
    }),
  ],
  ....
  debug: process.env.NEXT_AUTH_DEBUG,
  secret: process.env.JWT_SECRET,
});
Have you tried this solution?
Try editing your _app.tsx and setting your base path via the basePath prop on your SessionProvider, as in the sketch below.
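A minimal sketch of what that could look like, assuming next-auth v4 and the /myapp base path from the question (nothing here is project-specific beyond that):

// pages/_app.tsx (sketch)
import { SessionProvider } from "next-auth/react";

export default function App({ Component, pageProps: { session, ...pageProps } }) {
  return (
    // basePath points at the auth API routes under the custom base path
    <SessionProvider session={session} basePath="/myapp/api/auth">
      <Component {...pageProps} />
    </SessionProvider>
  );
}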
My understanding is that I fall into Group 1: those who are running a [nextjs] monorepo and therefore want to be able to import their other packages from node_modules, and who are running into an error similar to this:
../../node_modules/@waweb/base-ui.theme.brand-definition/dist/brand-definition.module.scss
CSS Modules cannot be imported from within node_modules. Read more:
https://nextjs.org/docs/messages/css-modules-npm Location:
../../node_modules/@waweb/base-ui.theme.brand-definition/dist/index.js
The official solution is next-transpile-modules, but as soon as I add any packages to the list of modules, I start getting errors in CSS modules in local source.
../../libs/ui/src/lib/contact.module.css
CSS Modules cannot be imported from within node_modules.
Read more: https://nextjs.org/docs/messages/css-modules-npm
Location: ../../libs/ui/src/lib/learn-more.tsx
Import trace for requested module:
../../libs/ui/src/lib/learn-more.tsx
../../libs/ui/src/lib/home.tsx
./pages/index.tsx
This is repeated for all components that were previously working.
I have prepared a branch in a public repo, with full CI/CD and a Gitpod dev environment configured, that demonstrates the critical change.
Let's assume the sources to the components I am attempting to transpile are located in the correct node_modules dir, and I am using the following next config:
// eslint-disable-next-line @typescript-eslint/no-var-requires
const withNx = require('@nrwl/next/plugins/with-nx');
const withPlugins = require('next-compose-plugins');
const withTM = require('next-transpile-modules')(
  [
    '@waweb/base-ui.theme.colors',
    '@waweb/base-ui.theme.color-definition',
    '@waweb/base-ui.theme.size-definition',
    '@waweb/base-ui.theme.shadow-definition',
    '@waweb/base-ui.theme.brand-definition',
    '@waweb/base-ui.theme.theme-provider',
  ],
  { debug: true }
);
const withPWA = require('next-pwa');

/**
 * @type {import('@nrwl/next/plugins/with-nx').WithNxOptions}
 **/
const nextConfig = {
  nx: {
    // Set this to true if you would like to use SVGR
    // See: https://github.com/gregberge/svgr
    svgr: true,
  },
  images: {
    domains: [
      'www.datocms-assets.com',
      'a.storyblok.com',
      'images.ctfassets.net',
      'images.prismic.io',
      'cdn.aglty.io',
      'localhost', // For Strapi
    ],
    imageSizes: [24, 64, 300],
  },
};

const pwaConfig = {};
const plugins = [[withNx], [withPWA, pwaConfig]];

module.exports = withTM(withPlugins([...plugins], nextConfig));
Any idea what's wrong with my setup here?
Thank you all for any thoughts as to what I'm doing wrong here.
Cheers!
Edit:
For some additional context: I have tried many different variations, and the one I ended up with (shown above) is what actually got the module transpilation to work, according to the debug statements. Only now do I get the reported errors in modules that are actually source components, not in node_modules. Using the plugin at all seems to break unrelated functionality.
It looks odd to me that you are wrapping withPlugins inside of withTM...
withTM is a plugin itself, so I would imagine it should follow this format:
module.exports = withPlugins([
  withTM
], nextConfig);
This seems to be what's expected when looking at the docs:
https://www.npmjs.com/package/next-transpile-modules
https://www.npmjs.com/package/next-compose-plugins
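Applied to the config from the question, that might look roughly like the sketch below; the requires, pwaConfig and nextConfig stay exactly as above, only the composition changes.

// next.config.js (sketch; same requires and nextConfig as in the question)
const plugins = [
  [withNx],
  [withPWA, pwaConfig],
  [withTM], // withTM passed as a plugin instead of wrapping withPlugins
];

module.exports = withPlugins(plugins, nextConfig);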
When I try to deploy my Cloud Functions, I am facing the error below. Before updating the Node version it was working fine.
node@14
Firebase CLI up to date
npm also up to date
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const nodemailer = require('nodemailer');
const cors = require('cors')({ origin: true });

admin.initializeApp();

exports.sendcertificate = functions.firestore
  .document('certificate/{docId}')
  .onCreate((snap: { data: () => any }, ctx: any) => {
    const data = snap.data();
    const authData = nodemailer.createTransport({
      host: 'mail.bacttraining.com',
      port: 465,
      secure: true, // use SSL
      auth: {
        user: '*******',
        pass: '*******',
      },
    });
    return authData
      .sendMail({
        from: '*******',
        to: '*******',
        bcc: '*******',
        sender: '*******',
        subject: 'Certificate Request',
        text: `${data.course}`,
        html: '*******',
      })
      .then(() => console.log('email sent successfully'))
      .catch((err: any) => console.error('could not send email: ', err));
  });
Make sure your CLI tools are up to date and that the modules you use are on their latest versions. I can see that the console is warning you that your Cloud Functions are outdated.
Then check any functions that did not deploy for bugs and syntax errors, as the Cloud Functions uploader can crash if the code was packaged incorrectly.
Once the above has been done, you can try deploying the functions one at a time with
firebase deploy --only functions:functionName
This will help narrow down which functions have bugs or syntax errors.
This issue occurs when there is a difference between the Node version installed on your system and the Node engine version specified in the package.json file.
Please check your Node version using
node -v
Make sure you have specified the same engine version in package.json.
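For example, if node -v reports a 14.x release, the engines field in the functions package.json would declare the matching major version (a minimal sketch, only the relevant field shown):

{
  "engines": {
    "node": "14"
  }
}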