How to access the Next.js buildId during build time?

I'm doing my first NextJS project, and currently looking at logging. Looking around, I found this blog post, which suggests configuring Pino to log the commit SHA in each message. This makes sense to me for a lot of reasons. The question is, how do I get the SHA when Next is doing its static build? In the blog example, they're deploying on Vercel, and get it from there:
const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: process.env.VERCEL_GITHUB_COMMIT_SHA,
  },
});
But I almost certainly will not be deploying on Vercel. Equally, there's lots of documentation on how to generate a custom build ID, which comes down to doing something like this in next.config.js:
const nextConfig = {
  generateBuildId: async () => {
    const buildId = await determineBuildId()
    console.log(`> Build ID: ${buildId}`)
    return buildId
  },
}
But I can't find any documentation on how to actually access the buildId within Next, so that it's a constant in my logging:
const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: // how do I access the generated buildId here?
  },
});
This really must just be a failing of my google-fu, because this seems like it should be common practice, but ... I can't find it. Any ideas? Or pointers to documentation?
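One possible approach (not from the original post, just a sketch): compute the SHA yourself in next.config.js and expose it through the env key, which inlines the value into the build. The git command and the COMMIT_SHA name are assumptions; any build-time source for the revision would work the same way.

// next.config.js -- sketch only; assumes git is available in the build environment
const { execSync } = require('child_process')

const commitSha = execSync('git rev-parse HEAD').toString().trim()

module.exports = {
  // reuse the SHA as the build ID so deployments and log entries line up
  generateBuildId: async () => commitSha,
  // values under `env` are inlined at build time and readable via process.env
  env: {
    COMMIT_SHA: commitSha,
  },
}

With that in place, the logger could read the value like any other environment variable:

const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: process.env.COMMIT_SHA,
  },
});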

Related

Identifier 'module' has already been declared - amplify and nuxt 3

I am getting an error in Nuxt 3 when setting up this Amplify plugin. I am trying to add auth to Nuxt 3 via plugins:
plugins/amplify.js
import Amplify, { withSSRContext } from 'aws-amplify';

export default defineNuxtPlugin((ctx) => {
  const awsConfig = {
    Auth: {
      region: "ap-south-1",
      userPoolId: "ap-south-1_#########",
      userPoolWebClientId: "#####################",
      authenticationFlowType: "USER_SRP_AUTH",
    },
  };
  Amplify.configure({ ...awsConfig, ssr: true });
  if (process.server) {
    const { Auth } = withSSRContext(ctx.req);
    return {
      provide: {
        auth: Auth,
      },
    };
  }
  return {
    provide: {
      auth: Auth,
    },
  };
});
[nuxt] [request error] Identifier 'module' has already been declared
at Loader.moduleStrategy (internal/modules/esm/translators.js:145:18)
at async link (internal/modules/esm/module_job.js:67:21)
Does anyone know what's going on?
Been facing this myself... I don't think it's a Nuxt problem, but rather Vite.
I gave up on running the app in dev mode and just resorted to building the app and launching it. Also, in order to use aws-amplify with Vite you need to apply some workarounds:
https://ui.docs.amplify.aws/vue/getting-started/troubleshooting
For the window statement (which only makes sense in the browser) you'll need to wrap that in an if statement. I added this to my plugin file:
if (process.client) {
  window.global = window;
  var exports = {};
}
This will let you build the project and run it with npm run build. Far from ideal, but unless someone knows how to fix that issue with dev in Vite...
BTW, you can also just switch to the webpack builder in the Nuxt settings and the issue goes away.
// https://v3.nuxtjs.org/api/configuration/nuxt.config
export default defineNuxtConfig({
  builder: "webpack"
});
I think this might be a problem with Nuxt's auto imports.
I added a ~/composables/useBucket.ts file which I used in ~/api. The same error started popping up the next day. After I moved ~/composables/useBucket.ts to ~/composablesServer/useBucket.ts the issue disappeared.

Build fails while building SSR/ISR pages with new API routes

I am getting issues while building new ISR/SSR pages with getStaticProps and getStaticPaths
Brief explanation:
While creating ISR/SSR pages and adding a new API route that never existed before, the build on Vercel fails because the pages are built before the API routes (the /pages/api folder).
Detailed explanation:
A. Creating the Next.js SSR page with this code (/pages/item/[pid].tsx):
export async function getStaticProps(context) {
  const pid = context.params.pid;
  // newly created API route
  const res = await fetch(process.env.APIpath + '/api/getItem?pid=' + (pid));
  const data = await res.json();
  return {
    props: { item: data }
  }
}

export async function getStaticPaths(context) {
  // newly created API route
  let res = await fetch(process.env.APIpath + '/api/getItemsList')
  const items = await res.json()
  let paths = []
  // multi-language support for the pages
  for (const item of items) {
    for (const locale of context.locales) {
      paths.push({ params: { pid: item.url }, locale: locale })
    }
  }
  return { paths, fallback: false }
}
B. Local checks work, deploying to Vercel
C. During the deploy, Vercel triggers an error because it tries to get data from an API route that does not exist yet. (Vercel deploys /pages/item/[pid].tsx first and the /api/getItemsList file after.) Vercel tries to get data from https://yourwebsite.com/api/getItemsList, which does not exist.
The only way I can avoid this error:
Creating the API routes needed
Deploying the project to Vercel
Creating the [pid].tsx page(s)
Deploying the final version of the code
The big issue with my approach is that you are making one deployment you don't actually need. The problem also appears whenever you rework the code of your API routes.
Question: is there a way to force Vercel to deploy the API routes first and then the pages?
Any help appreciated
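One commonly suggested workaround (not from the original post, just a sketch) is to avoid fetching your own API routes inside getStaticProps/getStaticPaths and instead call the underlying data-access code directly, so the build no longer depends on routes that have not been deployed yet. The lib/items module and the getItem/getItemsList helpers below are hypothetical names for logic the API routes could share.

// /pages/item/[pid].tsx -- sketch; the API routes would import the same helpers
import { getItem, getItemsList } from '../../lib/items';

export async function getStaticProps(context) {
  // call the data layer directly instead of fetching /api/getItem at build time
  const data = await getItem(context.params.pid);
  return { props: { item: data } };
}

export async function getStaticPaths(context) {
  const items = await getItemsList();
  const paths = [];
  for (const item of items) {
    for (const locale of context.locales) {
      paths.push({ params: { pid: item.url }, locale });
    }
  }
  return { paths, fallback: false };
}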

Nuxt plugin not available in Vuex's 'this' in Firebase production (ERROR: this.$myPlugin is not a function)

I'm trying to upload my Nuxt app to Firebase as a cloud function. The problem is that in the nuxtServerInit action I'm trying to call a plugin function, which apparently is not yet defined at that moment, because an error is thrown: (ERROR: this.$myPlugin is not a function). The code works in dev mode; it's only after the upload to Firebase that it fails.
The setup is as follows:
myPlugin.js
let env, auth, app, $store;

export default (context, inject) => {
  env = context.app.context.env;
  auth = context.app.$fire.auth;
  app = context.app;
  $store = context.store;
  inject('myPlugin', myPlugin);
};

async function myPlugin(...) { ... }
nuxt.config.js
plugins: [
  { src: '~/plugins/myPlugin', mode: 'all' }, // with no mode specified it fails too
],
vuex index.js
export const actions = {
  async nuxtServerInit({ dispatch, commit }, { req }) {
    const tl = await dispatch("initAction");
    return tl;
  }
}
vuex someModule.js
const actions = {
  initAction({ commit }) {
    return this.$myPlugin(...).then(...) // this line throws '$myPlugin is not a function' error
  }
}
What can be the reason for the different behaviour in dev and in prod modes and how could I fix the problem?
UPDATE:
After further testing I established that the problem is not caused by the nuxtServerInit timing. I moved the call of the initAction from nuxtServerInit to a page's created hook. However, the same error appears: this.$query is not a function.
The problem occurred because the JS files were not getting fully loaded due to CORB errors caused by an incorrect configuration. Details are described in this question.

How to use Nuxt.js served by cloud functions without huge loading times?

I love using Vue.js, but for a few upcoming projects I'm going to need to use Nuxt in order to rank better for SEO. I don't have a lot of experience setting up Nuxt, but I was able to find quite a lot of good tutorials on how to serve up an Express app running Nuxt. I usually integrate my projects with Firebase for auth and DB as well as cloud functions, so I would use cloud functions to serve up these Nuxt web apps as well.
But... setting everything up isn't all that hard; the loading times, on the other hand, are terrible, and I feel like this renders any use case close to unusable. Regular websites/apps load (almost) instantly on a good internet connection, but my test Nuxt websites consistently show a white screen for 5-10 seconds before showing the page.
I think this has to do with the cold start of cloud functions, but then how would you go about implementing Nuxt? I don't understand what the alternative is; is there one? I can't imagine companies would appreciate such long loading times (and neither would Google).
A quick demo project I set up with Nuxt: https://yke.plus/
My functions code:
const functions = require('firebase-functions')
const { Nuxt } = require('nuxt')
const express = require('express')

const app = express()

const config = {
  dev: false
}

const nuxt = new Nuxt(config)

let isReady = false
const readyPromise = nuxt
  .ready()
  .then(() => {
    isReady = true
  })
  .catch(() => {
    process.exit(1)
  })

async function handleRequest (req, res) {
  if (!isReady) {
    await readyPromise
  }
  res.set('Cache-Control', 'public, max-age=1, s-maxage=1')
  await nuxt.render(req, res)
}

app.get('*', handleRequest)
app.use(handleRequest)
exports.nuxtssr = functions.https.onRequest(app)
Maybe late, but try changing your cache control to a higher value, like:
res.set('Cache-Control', 'public, max-age=15778476, s-maxage=15778476')
You can also configure cache control for the static resources in firebase.json:
"headers": [
...
{
"source": "**/*.#(jpg|jpeg|gif|png|css|js|ico|svg)",
"headers": [
{
"key": "Cache-Control",
"value": "public, max-age=31536000, s-maxage=31536000"
}
]
},
...
]
https://firebase.google.com/docs/hosting/full-config?hl=pt-br#headers
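As a side note (not part of the original answer, just a sketch): s-maxage only has an effect when requests go through the Firebase Hosting CDN, which requires a rewrite in firebase.json that routes traffic to the function. The function name below matches the nuxtssr export from the question; the rest of the hosting block is assumed.

{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "**",
        "function": "nuxtssr"
      }
    ]
  }
}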

Log 'jsonPayload' in Firebase Cloud Functions

TL;DR;
Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to log entries to Stack Driver using the jsonPayload property so my logs are searchable (currently anything I pass to console.log gets stringified into textPayload).
I have a multi-module project with some code running on Firebase Cloud Functions, and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy the 'cloud-functions' module to Cloud Functions, 'backend-service' to GCE, which all depend on 'core' etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '#google-cloud/logging-bunyan' so my logs go to Stack Driver.
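(For reference, a minimal sketch of that kind of bunyan + @google-cloud/logging-bunyan setup, not taken from the original post; the logger name is an assumption.)

const bunyan = require('bunyan');
const { LoggingBunyan } = require('@google-cloud/logging-bunyan');

// stream that ships log records to Stack Driver / Cloud Logging
const loggingBunyan = new LoggingBunyan();

const logger = bunyan.createLogger({
  name: 'core', // hypothetical service name
  streams: [
    { stream: process.stdout, level: 'info' }, // local stdout output
    loggingBunyan.stream('info'),              // Stack Driver output
  ],
});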
Aside: Using this configuration in Google Cloud Functions is causing issues with Error: Endpoint read failed which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.
So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stack Driver under the jsonPayload but it's being stringified and put into the textPayload field.
It took me a while, but I finally came across this example in the firebase functions samples repository. In the end I settled on something a bit like this:
const Logging = require('@google-cloud/logging');

const logging = new Logging();
const log = logging.log('my-func-logger');

const logMetadata = {
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};
const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry)
You can add a string severity property value to logMetadata (e.g. "INFO" or "ERROR"). Here is the list of possible values.
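For example (a sketch, not from the original answer):

const logMetadata = {
  severity: 'ERROR', // e.g. 'DEBUG', 'INFO', 'WARNING', 'ERROR'
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};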
Update for available node 10 env vars. These seem to do the trick:
labels: {
  function_name: process.env.FUNCTION_TARGET,
  project: process.env.GCP_PROJECT,
  region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.
I ran into the same problem, and as stated in the comments on @wtk's answer, I would like to add the snippet below, which replicates all of the default cloud function logging behavior I could find, including execution_id.
At least when using Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.
// global
const { Logging } = require("@google-cloud/logging");

const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
  severity: "INFO",
  type: "cloud_function",
  labels: {
    function_name: process.env.FUNCTION_NAME,
    project: process.env.GCLOUD_PROJECT,
    region: process.env.FUNCTION_REGION
  }
};

// per request
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
  ...LogMetadata,
  severity: 'INFO',
  trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
  labels: {
    execution_id: req.get("function-execution-id")
  }
};
Log.write(Log.entry(metadata, data));
The github link in @wtk's answer should be updated to:
https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103
As it refers to the repository as of when the question was answered, and has the following function in it:
// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
  // This is the name of the StackDriver log stream that will receive the log
  // entry. This name can be any valid log stream name, but must contain "err"
  // in order for the error to be picked up by StackDriver Error Reporting.
  const logName = 'errors';
  const log = logging.log(logName);
  // https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
  const metadata = {
    resource: {
      type: 'cloud_function',
      labels: { function_name: process.env.FUNCTION_NAME },
    },
  };
  // https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
  const errorEvent = {
    message: err.stack,
    serviceContext: {
      service: process.env.FUNCTION_NAME,
      resourceType: 'cloud_function',
    },
    context: context,
  };
  // Write the error log entry
  return new Promise((resolve, reject) => {
    log.write(log.entry(metadata, errorEvent), (error) => {
      if (error) {
        return reject(error);
      }
      resolve();
    });
  });
}
// [END reporterror]
