I am running into issues while building new ISR/SSR pages with getStaticProps and getStaticPaths
Brief explanation:
When I create ISR/SSR pages and add a new API route that never existed before, the build on Vercel fails, because the pages are built before the API routes (the /pages/api folder).
Detailed explanation:
A. I create a Next.js SSR page (/pages/item/[pid].tsx) with this code:
export async function getStaticProps(context) {
  const pid = context.params.pid;
  // newly created API route
  const res = await fetch(process.env.APIpath + '/api/getItem?pid=' + pid);
  const data = await res.json();
  return {
    props: { item: data }
  };
}
export async function getStaticPaths(context) {
  // newly created API route
  const res = await fetch(process.env.APIpath + '/api/getItemsList');
  const items = await res.json();
  const paths = [];
  // multi-language support for the pages
  for (const item of items) {
    for (const locale of context.locales) {
      paths.push({ params: { pid: item.url }, locale: locale });
    }
  }
  return { paths, fallback: false };
}
B. Local checks pass, so I deploy to Vercel.
C. During deployment, Vercel throws an error because the build tries to fetch data from an API route that doesn't exist yet (Vercel builds /pages/item/[pid].tsx first and the /api/getItemsList file after it). The build requests https://yourwebsite.com/api/getItemsList, which does not exist.
The only way I have found to avoid this error:
Create the needed API routes
Deploy the project to Vercel
Create the [pid].tsx page(s)
Deploy the final version of the code
The big issue with my approach is that you end up making a deployment you don't actually need. The same problem also appears whenever you rework the code of your existing API routes.
Question: is there a way to force Vercel to deploy the API routes first and the pages afterwards?
Any help appreciated
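For reference, the workaround usually suggested for this class of build-time failure is to drop the HTTP round trip entirely and call the data layer directly inside getStaticProps/getStaticPaths, so the build no longer depends on an already-deployed API route. A minimal sketch, assuming the route logic can be factored into a shared helper (lib/items.js and its two functions are hypothetical names):

// lib/items.js — hypothetical shared data layer; replace the bodies with
// whatever /api/getItemsList and /api/getItem currently do (DB query, CMS call, ...)
export async function getItemsList() {
  return []; // e.g. db.collection('items').find().toArray()
}
export async function getItem(pid) {
  return null; // e.g. db.collection('items').findOne({ url: pid })
}

// /pages/item/[pid].tsx — the same pages as above, but with direct calls instead of fetch()
import { getItem, getItemsList } from '../../lib/items';

export async function getStaticProps(context) {
  const data = await getItem(context.params.pid);
  return { props: { item: data } };
}

export async function getStaticPaths(context) {
  const items = await getItemsList();
  const paths = [];
  for (const item of items) {
    for (const locale of context.locales) {
      paths.push({ params: { pid: item.url }, locale });
    }
  }
  return { paths, fallback: false };
}

The API routes themselves can keep existing for client-side calls; they just import the same helpers.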
I'm doing my first NextJS project and am currently looking at logging. Looking around, I found this blog post, which suggests configuring Pino to log the commit SHA in each message. That makes sense to me for a lot of reasons. The question is: how do I get the SHA when Next is doing its static build? In the blog example they're deploying on Vercel and get it from there:
const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: process.env.VERCEL_GITHUB_COMMIT_SHA,
  },
});
But I almost certainly will not be deploying on Vercel. Equally, there's lots of documentation on how to generate a custom build ID, which comes down to doing something like this in next.config.js:
const nextConfig = {
  generateBuildId: async () => {
    const buildId = await determineBuildId()
    console.log(`> Build ID: ${buildId}`)
    return buildId
  },
}
But I can't find any documentation on how to actually access the buildId within Next, so that it's a constant in my logging:
const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: // how do I access the generated buildId here?
  },
});
This really must just be a failing of my google-fu, because this seems like it should be common practice, but ... I can't find it. Any ideas? Or pointers to documentation?
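One hedged way to do this, assuming the build runs inside a git checkout: compute the SHA once in next.config.js, reuse it as the build ID, and expose it through Next's env config so it is inlined wherever the logger is created (COMMIT_SHA is an arbitrary name here, not a Next.js built-in):

// next.config.js — a sketch; assumes git is available at build time
const { execSync } = require('child_process')

const commitSha = execSync('git rev-parse HEAD').toString().trim()

module.exports = {
  // reuse the SHA as the build ID...
  generateBuildId: async () => commitSha,
  // ...and inline it at build time so app code can read it
  env: {
    COMMIT_SHA: commitSha,
  },
}

The logger can then read it in place of the Vercel-specific variable:

const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: process.env.COMMIT_SHA,
  },
});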
This question already has answers here:
Internal API fetch with getServerSideProps? (Next.js) (3 answers)
Closed last year.
Using getServerSideProps to fetch internal API data, the TTFB is really high and my page runs slowly.
So I'm searching for other fetching strategies. My MongoDB data is not large (database size: 33.84 KB) and it does not change often, so I think static generation is the best fit: only about 25 pages would be generated in total. The problem is that getStaticProps() can't fetch the internal API (it works in development, but not in production).
What I have tried:
useEffect: slower than getServerSideProps
exporting the MongoDB data to a data.js file and using it in the project as a fake API: this works with getStaticProps, but I still want to store the data in the database
hosting the API on another domain as an external API: getStaticProps works, but the approach feels wrong
hard-coding all 25 pages (not an option)
Question:
What is the right way to improve the code and the TTFB?
Why can't getStaticProps fetch the internal API, and why is it designed that way?
I found an article on MongoDB (here is the link): don't use the internal API at all, and instead fetch the data directly from MongoDB in getStaticProps. Here is my code.
BEFORE
export async function getServerSideProps() {
  const response = await fetch(`${server}/api/gallery`);
  const data = await response.json();
  if (!data) {
    return {
      notFound: true,
    };
  }
  return {
    props: { data },
  };
}
AFTER
export async function getStaticProps() {
  // connect to MongoDB
  await dbConnect();
  // use a mongoose model to fetch the data
  const gallery = await art.find();
  return {
    props: {
      // serialize the mongoose documents into plain JSON-safe objects
      data: JSON.parse(JSON.stringify(gallery))
    }
  };
}
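For completeness, dbConnect isn't shown in the question; a typical cached mongoose connection helper looks something like this (a sketch — the file path and the MONGODB_URI env var are assumptions):

// lib/dbConnect.js — cache the connection so repeated builds/invocations reuse it
import mongoose from 'mongoose';

const MONGODB_URI = process.env.MONGODB_URI;

let cached = global.mongoose;
if (!cached) {
  cached = global.mongoose = { conn: null, promise: null };
}

export default async function dbConnect() {
  if (cached.conn) return cached.conn;
  if (!cached.promise) {
    cached.promise = mongoose.connect(MONGODB_URI);
  }
  cached.conn = await cached.promise;
  return cached.conn;
}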
I'm trying to upload my Nuxt app to Firebase as a cloud function. The problem is that in the nuxtServerInit action I'm trying to call a plugin function, which apparently is not yet defined at that moment, because an error is thrown: ERROR: this.$myPlugin is not a function. The code works in dev mode; it only fails after the upload to Firebase.
The setup is as follows:
myPlugin.js
let env, auth, app, $store;

export default (context, inject) => {
  env = context.app.context.env;
  auth = context.app.$fire.auth;
  app = context.app;
  $store = context.store;
  inject('myPlugin', myPlugin);
};
async function myPlugin(...) {... }
nuxt.config.js
plugins: [
  { src: '~/plugins/myPlugin', mode: 'all' }, // with no mode specified it fails too
],
vuex index.js
export const actions = {
  async nuxtServerInit({ dispatch, commit }, { req }) {
    const tl = await dispatch("initAction");
    return tl;
  }
}
vuex someModule.js
const actions = {
  initAction({ commit }) {
    return this.$myPlugin(...).then(...) // this line throws '$myPlugin is not a function' error
  }
}
What could be the reason for the different behaviour in dev and prod modes, and how can I fix the problem?
UPDATE:
After further testing, I established that the problem is not caused by the nuxtServerInit timing. I moved the initAction call from nuxtServerInit to a page's created hook, but the same error appears: this.$query is not a function.
It turned out the problem occurred because the JS files were not getting fully loaded, due to CORB errors caused by an incorrect configuration. Details are described in this question.
I'm moving a MERN project into React + MongoDB Stitch after seeing it allows for easy user authentication, quick deployment, etc.
However, I am having a hard time understanding where and how I can call a site-scraping function. Previously, I scraped pages in Express.js with cheerio like this:
app.post("/api/getTitleAtURL", (req, res) => {
  if (req.body.url) {
    request(req.body.url, function(error, response, body) {
      if (!error && response.statusCode == 200) {
        const $ = cheerio.load(body);
        const webpageTitle = $("title").text();
        const metaDescription = $("meta[name=description]").attr("content");
        const webpage = {
          title: webpageTitle,
          metaDescription: metaDescription
        };
        res.send(webpage);
      } else {
        res.status(400).send({ message: "THIS IS AN ERROR" });
      }
    });
  }
});
But obviously with Stitch, no Node & Express is needed. Is there a way to fetch another site's content without having to host a Node.js application just to serve that one function?
Thanks
It turns out you can build Functions in MongoDB Stitch that allow you to upload external dependencies.
However, there are limitations: for example, cheerio didn't work as an uploaded external dependency, while request did. A solution, therefore, is to create a serverless function in AWS Lambda and then connect MongoDB Stitch to it (Stitch can connect to many third-party services, including AWS services such as Lambda, S3, Kinesis, etc.).
AWS Lambda allows you to upload any external dependency; if MongoDB Stitch allowed the same, we wouldn't need Lambda, but Stitch's dependency support is still limited. In my case, I had a Node function with cheerio & request as external dependencies. To upload it to Lambda: make an account, create a new Lambda function, and pack your node modules & code into a zip file to upload. Your zip should look like this:
and your file containing the function should look like:
const cheerio = require("cheerio");
const request = require("request");

exports.rss = function(event, context, callback) {
  request(event.requestURL, function(error, response, body) {
    if (!error && response.statusCode == 200) {
      const $ = cheerio.load(body);
      const webpageTitle = $("title").text();
      const metaDescription = $("meta[name=description]").attr("content");
      const webpage = {
        title: webpageTitle,
        metaDescription: metaDescription
      };
      callback(null, webpage);
      return webpage;
    } else {
      callback(null, { message: "THIS IS AN ERROR" });
      return { message: "THIS IS AN ERROR" };
    }
  });
};
Then, in MongoDB Stitch, connect to a third-party service: choose AWS and enter the secret keys you got when creating an IAM user. Under Rules -> Actions, choose Lambda as your API and allow all actions. Now your MongoDB Stitch functions can invoke the Lambda; in my case that function looks like this:
exports = async function(requestURL) {
  const lambda = context.services.get('getTitleAtURL').lambda("us-east-1");
  const result = await lambda.Invoke({
    FunctionName: "getTitleAtURL",
    Payload: JSON.stringify({ requestURL: requestURL })
  });
  console.log(result.Payload.text());
  return EJSON.parse(result.Payload.text());
};
Note: this slowed performance down significantly though; in general, calls took about twice as long to finish.
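For completeness, calling that Stitch function from the React client looks roughly like this (a sketch using the mongodb-stitch-browser-sdk; the app ID is a placeholder and anonymous auth is just one option):

import { Stitch, AnonymousCredential } from "mongodb-stitch-browser-sdk";

// placeholder app ID — use your own Stitch app ID here
const client = Stitch.initializeDefaultAppClient("myapp-abcde");

async function getTitleAtURL(url) {
  // authenticate before calling functions
  await client.auth.loginWithCredential(new AnonymousCredential());
  // invokes the Stitch function above, which in turn invokes the Lambda
  return client.callFunction("getTitleAtURL", [url]);
}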
TL;DR:
Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to write entries to Stackdriver using the jsonPayload property, so my logs are searchable? Currently anything I pass to console.log gets stringified into textPayload.
I have a multi-module project with some code running on Firebase Cloud Functions and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module; I deploy the 'cloud-functions' module to Cloud Functions and 'backend-service' to GCE, both of which depend on 'core', etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '@google-cloud/logging-bunyan', so my logs go to Stackdriver.
Aside: using this configuration in Google Cloud Functions causes issues with Error: Endpoint read failed, which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure of the real cause.
So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stackdriver under jsonPayload, but it's being stringified and put into the textPayload field.
It took me a while, but I finally came across this example in the firebase functions samples repository. In the end I settled on something a bit like this:
const { Logging } = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-func-logger');
const logMetadata = {
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};
const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry);
You can add a string severity property to logMetadata (e.g. "INFO" or "ERROR"). Here is the list of possible values.
Update for the env vars available on the Node 10 runtime. These seem to do the trick:
labels: {
  function_name: process.env.FUNCTION_TARGET,
  project: process.env.GCP_PROJECT,
  region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.
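As an aside for the newer runtimes: on Node 10+, Cloud Logging parses a single-line JSON string written to stdout into a structured entry, so plain console.log can produce a jsonPayload directly (a sketch; field promotion such as severity follows Google's structured-logging conventions):

// a sketch: on Node 10+ runtimes, single-line JSON on stdout is parsed
// by Cloud Logging into jsonPayload instead of textPayload
console.log(JSON.stringify({
  severity: "INFO",          // promoted to the entry's severity
  message: "score updated",  // used as the entry's display text
  id: 1,
  score: 100
}));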
I ran into the same problem, and as stated in the comments on @wtk's answer, I would like to add a snippet that replicates all of the default cloud function logging behavior I could find, including execution_id.
At least when using Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.
// global
const { Logging } = require("@google-cloud/logging");
const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
  severity: "INFO",
  resource: {
    type: "cloud_function",
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    }
  }
};

// per request
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
  ...LogMetadata,
  severity: "INFO",
  trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
  labels: {
    execution_id: req.get("function-execution-id")
  }
};
Log.write(Log.entry(metadata, data));
The github link in @wtk's answer should be updated to:
https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103
As it refers to the repository as of when the question was answered, and has the following function in it:
// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
  // This is the name of the StackDriver log stream that will receive the log
  // entry. This name can be any valid log stream name, but must contain "err"
  // in order for the error to be picked up by StackDriver Error Reporting.
  const logName = 'errors';
  const log = logging.log(logName);
  // https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
  const metadata = {
    resource: {
      type: 'cloud_function',
      labels: { function_name: process.env.FUNCTION_NAME },
    },
  };
  // https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
  const errorEvent = {
    message: err.stack,
    serviceContext: {
      service: process.env.FUNCTION_NAME,
      resourceType: 'cloud_function',
    },
    context: context,
  };
  // Write the error log entry
  return new Promise((resolve, reject) => {
    log.write(log.entry(metadata, errorEvent), (error) => {
      if (error) {
        return reject(error);
      }
      resolve();
    });
  });
}
// [END reporterror]
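A typical call site for reportError, sketched with a firebase-functions HTTPS handler (myFunc and its body are hypothetical, not part of the sample):

const functions = require('firebase-functions');

exports.myFunc = functions.https.onRequest(async (req, res) => {
  try {
    // ... real handler logic here ...
    res.send('ok');
  } catch (err) {
    // report to Stackdriver Error Reporting, then fail the request
    await reportError(err, { httpRequest: { method: req.method, url: req.originalUrl } });
    res.status(500).send('Internal error');
  }
});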