local development with secrets - cloudflare-workers

I am following this guide to get secrets added to my prod environment with cloudflare workers:
https://developers.cloudflare.com/workers/platform/environment-variables/#comparing-secrets-and-environment-variables
I am able to add new secrets via wrangler secret put, and I see them in the dashboard. When I run my code locally with wrangler, it doesn't look like the variables are injected. I'm getting an error like this:
Uncaught ReferenceError: TOKEN is not defined
at line 0
at throwFetchError (/Users/justin.beckwith/.nvm/versions/node/v16.14.0/lib/node_modules/wrangler/wrangler-dist/cli.js:134316:17)
at fetchResult (/Users/justin.beckwith/.nvm/versions/node/v16.14.0/lib/node_modules/wrangler/wrangler-dist/cli.js:134287:5)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async previewToken (/Users/justin.beckwith/.nvm/versions/node/v16.14.0/lib/node_modules/wrangler/wrangler-dist/cli.js:134658:29)
at async createWorker (/Users/justin.beckwith/.nvm/versions/node/v16.14.0/lib/node_modules/wrangler/wrangler-dist/cli.js:134675:17)
at async start (/Users/justin.beckwith/.nvm/versions/node/v16.14.0/lib/node_modules/wrangler/wrangler-dist/cli.js:136075:16) {
I know the secret is set, and from what I can tell the values should be auto-injected. Any ideas on what I'm missing here? Thank you!

To expand on melops' answer above, in TypeScript:
interface Env {
  var_name: var_type;
}

export default {
  async fetch(request: Request, env: Env, context: ExecutionContext): Promise<Response> {
    console.log(env); // secrets and vars are available as properties of `env`
    return new Response("OK");
  },
};

I got the answer from here. You can create a file named .dev.vars and put the secrets in it. Wrangler 2 will bind them automatically when you run $ wrangler dev
Example: put the secrets in the .dev.vars file:
API_KEY_0 = "YOUR_SECRET_0"
API_KEY_1 = "YOUR_SECRET_1"
Run the command below and you will see this in the terminal:
$ wrangler dev
...
Your worker has access to the following bindings:
- Vars:
  - API_KEY_0: "(hidden)"
  - API_KEY_1: "(hidden)"
...
Your code, in .ts or .js:
interface Env {
  API_KEY_0: string;
  API_KEY_1: string;
}

export default {
  async fetch(request: Request, env: Env, context: ExecutionContext): Promise<Response> {
    console.log(env.API_KEY_0, env.API_KEY_1);
    return new Response("OK");
  },
};
or, in plain JavaScript:
export default {
  async fetch(request, env, context) {
    console.log(env.API_KEY_0, env.API_KEY_1);
    return new Response("OK");
  },
};
for your reference
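To round this out (a minimal sketch, not part of the original answer): .dev.vars only affects wrangler dev. For the deployed Worker, secrets are still uploaded with wrangler secret put (as in the question), and non-secret values can be declared in a [vars] block in wrangler.toml. The secret name below is just the one from the example above, and the plain variable is made up:
# upload a secret for the deployed Worker (you will be prompted for the value)
$ wrangler secret put API_KEY_0
# non-secret values can go in wrangler.toml instead
[vars]
SOME_PLAIN_VAR = "not-a-secret"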

If you are using Module Workers, you should be able to access your secrets via the env parameter.
export default {
  async fetch(request, env, context) {
    return new Response(`TOKEN: ${env.TOKEN}`);
  },
};
I am currently using wrangler2 version 0.0.17
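For contrast, a minimal sketch (my addition, not from the answer above): in the older Service Worker format, bindings such as TOKEN are injected as globals, which is the only case where referencing a bare TOKEN works; in the module format above it has to be read from env.
// Service Worker syntax: the secret binding is a global variable
addEventListener('fetch', (event) => {
  event.respondWith(new Response(`TOKEN: ${TOKEN}`));
});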

Related

Identifier 'module' has already been declared - amplify and nuxt 3

I am getting an error in Nuxt 3 when setting up this Amplify plugin. I am trying to add auth to Nuxt 3 via plugins.
plugins/amplify.js
import Amplify, { withSSRContext } from 'aws-amplify';

export default defineNuxtPlugin((ctx) => {
  const awsConfig = {
    Auth: {
      region: "ap-south-1",
      userPoolId: "ap-south-1_#########",
      userPoolWebClientId: "#####################",
      authenticationFlowType: "USER_SRP_AUTH",
    },
  };
  Amplify.configure({ ...awsConfig, ssr: true });
  if (process.server) {
    const { Auth } = withSSRContext(ctx.req);
    return {
      provide: {
        auth: Auth,
      },
    };
  }
  return {
    provide: {
      auth: Auth,
    },
  };
});
[nuxt] [request error] Identifier 'module' has already been declared
at Loader.moduleStrategy (internal/modules/esm/translators.js:145:18)
at async link (internal/modules/esm/module_job.js:67:21)
Does anyone know what's going on?
Been facing this myself... I don't think it's a Nuxt problem but rather Vite.
I gave up on running the app in dev mode and just resorted to building the app and launching it. Also, in order to use aws-amplify with Vite you need to apply some workarounds:
https://ui.docs.amplify.aws/vue/getting-started/troubleshooting
For the window statement (which only makes sense in the browser) you'll need to wrap that with an if statement. I added this to my plugin file:
if (process.client) {
  window.global = window;
  var exports = {};
}
This will let you build the project and run it with npm run build. Far from ideal, but unless someone knows how to fix that issue with dev in Vite...
BTW, you can also just switch to the webpack builder in the Nuxt settings and the issue goes away.
// https://v3.nuxtjs.org/api/configuration/nuxt.config
export default defineNuxtConfig({
  builder: "webpack",
});
I think this might be a problem with Nuxt's auto imports.
I added a ~/composables/useBucket.ts file which I used in ~/api. The same error started popping up the next day. After I moved ~/composables/useBucket.ts to ~/composablesServer/useBucket.ts the issue disappeared.

Add custom headers to Next.js responses based on environment

I'm trying to add the X-Robots-Tag header to all Next.js HTTP responses based on something in the environment the server is deployed to -- whether that is an environment variable (my preference) or anything else.
My Next.js application is deployed to two environments: an integration testing environment that uses the production Next.js build (NODE_ENV="production") but is connected to non-prod services, and the actual production environment that serves user traffic. I want to add the header only to the integration testing environment.
I've tried adding the header conditionally based on process.env.INTEGRATION_TEST_ENV in headers() in next.config.js, but any env var like process.env.XYZ seems to be evaluated at build time, not at runtime. For example, this doesn't work, even though the INTEGRATION_TEST_ENV environment variable is set to the string "true" on the server:
async headers() {
  if (process.env.INTEGRATION_TEST_ENV === "true") {
    console.log("This code will never be run. The condition never evaluates to true, despite the runtime env var actually being set to 'true'.")
    return [
      {
        source: "/:path*",
        headers: [
          {
            key: "X-Robots-Tag",
            value: "none",
          },
        ],
      },
    ]
  }
  return []
},
I can't use next.config.js's phases either, since both my integration test and "real production" are running the production build and production server.
A custom server might solve the problem, but it seems like overkill, especially with the loss of automatic static optimization.
Is there any way to add a header based on a runtime environment variable?
A non-getServerSideProps(context) option would be to use a _middleware page:
// pages/_middleware.js
import { NextResponse } from "next/server";

export function middleware() {
  const res = NextResponse.next();
  // `process.env` evaluated at build time
  if (process.env.INTEGRATION_TEST_ENV === "true") {
    res.headers.set("X-Robots-Tag", "none");
  }
  return res;
}
I'm not sure if you're just compiling once and then each deployment target gets the same bundle (and so the environment variable would be "baked in"), but if you can find a workaround for that, this could potentially work.
And the other approach mentioned in the comments earlier:
export default function Home() {
  return "Hello, world!";
}

// automatic static optimization no longer applies
export function getServerSideProps(context) {
  if (process.env.INTEGRATION_TEST_ENV === "true") {
    context.res.setHeader("X-Robots-Tag", "none");
  }
  return {
    props: {},
  };
}

Nuxt plugin not available in Vuex's 'this' in Firebase production (ERROR: this.$myPlugin is not a function)

I'm trying to upload my Nuxt app to Firebase as a cloud function. The problem is that in the nuxtServerInit action I'm trying to call a plugin function, which apparently is not yet defined at that moment, because an error is thrown (ERROR: this.$myPlugin is not a function). The code works in dev mode; it only fails after upload to Firebase.
The setup is as follows:
myPlugin.js
let env, auth, app, $store;
export default (context, inject) => {
env = context.app.context.env;
auth = context.app.$fire.auth;
app = context.app;
$store = context.store;
inject('myPlugin', myPlugin);
};
async function myPlugin(...) {... }
nuxt.config.js
plugins: [
  { src: '~/plugins/myPlugin', mode: 'all' }, // with no mode specified it fails too
],
vuex index.js
export const actions = {
  async nuxtServerInit({ dispatch, commit }, { req }) {
    const tl = await dispatch("initAction");
    return tl;
  },
};
vuex someModule.js
const actions = {
  initAction({ commit }) {
    return this.$myPlugin(...).then(...) // this line throws '$myPlugin is not a function' error
  },
};
What can be the reason for the different behaviour in dev and in prod modes and how could I fix the problem?
UPDATE:
After further testing I established that the problem is not caused by the nuxtServerInit timing. I moved the call of the initAction from nuxtServerInit to a page's created hook. However the same error appears: this.$query is not a function.
The problem occurred because the JS files were not getting fully loaded due to CORB errors caused by an incorrect configuration. Details are described in this question.

Get which program is generated in meteor by babel

How can I know, in a Babel plugin, whether the files currently being transpiled by Babel are being transpiled for the server or for the client/browser package?
Meteor recently implemented the caller option available in Babel 7. To use it and access the information in a plugin, one can access the Babel.caller property like this:
let caller;

module.exports = function (Babel) {
  Babel.caller(function (c) {
    caller = { ...c };
  });
  return {
    visitor: {
      BlockStatement() {
        console.log(caller); // logs e.g. {name: "meteor", arch: "web.browser.legacy"}
      },
    },
  };
};
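To make that a bit more concrete, here is a sketch (my addition) of branching on the reported arch inside the visitor. It assumes Meteor reports server bundles with an arch starting with "os", while client bundles use web.browser / web.browser.legacy / web.cordova, as in the logged example above:
let caller;

module.exports = function (Babel) {
  Babel.caller(function (c) {
    caller = { ...c };
  });
  return {
    visitor: {
      BlockStatement(path) {
        // Assumption: server bundles report an arch like "os.*", client bundles "web.*"
        const isServerArch = caller && typeof caller.arch === "string" && caller.arch.startsWith("os");
        if (isServerArch) {
          return; // skip this (hypothetical) client-only transform on the server
        }
        // ...client-only transformation would go here
      },
    },
  };
};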

Log 'jsonPayload' in Firebase Cloud Functions

TL;DR;
Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to log entries to Stack Driver using the jsonPayload property so my logs are searchable (currently anything I pass to console.log gets stringified into textPayload).
I have a multi-module project with some code running on Firebase Cloud Functions, and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy the 'cloud-functions' module to Cloud Functions, 'backend-service' to GCE, which all depend on 'core' etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '#google-cloud/logging-bunyan' so my logs go to Stack Driver.
Aside: Using this configuration in Google Cloud Functions is causing issues with Error: Endpoint read failed which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.
So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stack Driver under the jsonPayload but it's being stringified and put into the textPayload field.
It took me a while, but I finally came across this example in the firebase functions samples repository. In the end I settled on something a bit like this:
const { Logging } = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-func-logger');
const logMetadata = {
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};
const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry);
You can add a string severity property value to logMetadata (e.g. "INFO" or "ERROR"). Here is the list of possible values.
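For example, a minimal sketch reusing the metadata object from above:
const logMetadata = {
  severity: 'ERROR', // any valid LogSeverity string, e.g. 'DEBUG', 'INFO', 'WARNING', 'ERROR'
  resource: {
    type: 'cloud_function',
    labels: { function_name: process.env.FUNCTION_NAME },
  },
};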
Update for the available Node 10 env vars. These seem to do the trick:
labels: {
  function_name: process.env.FUNCTION_TARGET,
  project: process.env.GCP_PROJECT,
  region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.
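If you deploy with the gcloud CLI rather than the Firebase CLI, setting the values explicitly might look something like this (a sketch only; the function name and values are placeholders, and you should check that the variable names are not reserved by your runtime):
# provide the values yourself at deploy time
gcloud functions deploy myFunction \
  --runtime nodejs10 \
  --set-env-vars FUNCTION_REGION=us-central1,MY_PROJECT_LABEL=my-project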
I ran into the same problem, and as stated in the comments on @wtk's answer, I would like to add a snippet that replicates all of the default Cloud Function logging behavior I could find, including execution_id.
At least when using Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.
// global
const { Logging } = require("@google-cloud/logging");
const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
  severity: "INFO",
  type: "cloud_function",
  labels: {
    function_name: process.env.FUNCTION_NAME,
    project: process.env.GCLOUD_PROJECT,
    region: process.env.FUNCTION_REGION
  }
};

// per request
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
  ...LogMetadata,
  severity: 'INFO',
  trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
  labels: {
    execution_id: req.get("function-execution-id")
  }
};
Log.write(Log.entry(metadata, data));
The github link in @wtk's answer should be updated to:
https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103
As it refers to the repository as of when the question was answered, and it contains the following function:
// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
  // This is the name of the StackDriver log stream that will receive the log
  // entry. This name can be any valid log stream name, but must contain "err"
  // in order for the error to be picked up by StackDriver Error Reporting.
  const logName = 'errors';
  const log = logging.log(logName);

  // https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
  const metadata = {
    resource: {
      type: 'cloud_function',
      labels: {function_name: process.env.FUNCTION_NAME},
    },
  };

  // https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
  const errorEvent = {
    message: err.stack,
    serviceContext: {
      service: process.env.FUNCTION_NAME,
      resourceType: 'cloud_function',
    },
    context: context,
  };

  // Write the error log entry
  return new Promise((resolve, reject) => {
    log.write(log.entry(metadata, errorEvent), (error) => {
      if (error) {
        return reject(error);
      }
      resolve();
    });
  });
}
// [END reporterror]
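A usage sketch of the helper above (my addition; the trigger, handler body, and context fields are hypothetical and assume the firebase-functions SDK is available):
const functions = require('firebase-functions');

exports.demo = functions.https.onRequest(async (req, res) => {
  try {
    throw new Error('something went wrong'); // stand-in for real work that may fail
  } catch (err) {
    // write the entry to the "errors" log stream before responding
    await reportError(err, { httpRequest: { method: req.method, url: req.url } });
    res.status(500).send('Internal error');
  }
});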
