I'm trying to migrate a Next.js project running on Vercel from
"pino-datadog": "2.0.2",
"pino-multi-stream": "6.0.0",
to
"pino": "8.4.2",
"pino-datadog-transport": "1.2.2",
and I copied the setup from pino-datadog-transport's README.md:
import { LoggerOptions, pino } from 'pino'

const pinoConf: LoggerOptions = {
  level: 'trace',
}

const logger = pino(
  pinoConf,
  pino.transport({
    target: 'pino-datadog-transport',
    options: {
      ddClientConf: {
        authMethods: {
          apiKeyAuth: process.env.DATADOG_API_KEY,
        },
      },
      ddServerConf: {
        site: 'datadoghq.eu',
      },
      service: process.env.VERCEL_URL,
      ddsource: 'nodejs',
    },
  }),
)
and this seems to be working fine locally, but when I publish it on Vercel and run it there I get the following error:
ERROR Error: unable to determine transport target for "pino-datadog-transport"
at fixTarget (/var/task/node_modules/pino/lib/transport.js:136:13)
at Function.transport (/var/task/node_modules/pino/lib/transport.js:110:22)
Am I missing some additional config to get this working? Anyone else running this setup or something similar to get explicit logs working on Vercel with Next.js?
I have enabled the Datadog integration in Vercel as well, but that only forwards Next.js logs, not explicit console.logs or standard Pino logs from what I can tell.
The solution to this problem is to import the transport package, even though nothing from the import is actually used in the code.
It seems Next.js strips away any module that isn't explicitly imported when the code is deployed.
So, adding
import 'pino-datadog-transport'
at the top of the file solves the problem.
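Putting the two together, the logger file would look roughly like this (a sketch based on the snippets above; the side-effect import at the top is the only change):

import 'pino-datadog-transport' // side-effect import so Next.js doesn't strip the transport module from the deployed bundle
import { LoggerOptions, pino } from 'pino'

const pinoConf: LoggerOptions = {
  level: 'trace',
}

const logger = pino(
  pinoConf,
  pino.transport({
    target: 'pino-datadog-transport',
    options: {
      ddClientConf: {
        authMethods: {
          apiKeyAuth: process.env.DATADOG_API_KEY,
        },
      },
      ddServerConf: {
        site: 'datadoghq.eu',
      },
      service: process.env.VERCEL_URL,
      ddsource: 'nodejs',
    },
  }),
)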
Related
I am not able to get HMR running with "wp-scripts start --hot". I tried this in several projects, including rather empty ones.
As soon as I add the --hot flag to my npm script, the script still runs, but I get this error message in my console:
ReactRefreshEntry.js:17 Uncaught TypeError: Cannot read properties of undefined (reading 'injectIntoGlobalHook')
at ./node_modules/@pmmmwh/react-refresh-webpack-plugin/client/ReactRefreshEntry.js (ReactRefreshEntry.js:17:1)
at options.factory (react refresh:6:1)
at __webpack_require__ (bootstrap:24:1)
at startup:4:66
at __webpack_require__.O (chunk loaded:25:1)
at startup:9:1
at startup:9:1
It doesn't even load my JavaScript, as it seems to break at an earlier point.
I have already switched Node versions back and forth, deleted node_modules and package-lock.json, and removed all my JavaScript to see if that would solve anything, but it doesn't.
My setup:
Local by Flywheel as the local WordPress environment
WordPress: 6.0.3
Node v16.18.0
npm 8.19.2
@wordpress/scripts version: 24.4.0
wp-config.php:
define('WP_DEBUG', false);
define('SCRIPT_DEBUG', true);
Also, I have the Gutenberg plugin installed and activated, as stated here: https://developer.wordpress.org/block-editor/reference-guides/packages/packages-scripts/#:~:text=%2D%2Dhot%20%E2%80%93%20enables%20%E2%80%9CFast%20Refresh%E2%80%9D.%20The%20page%20will%20automatically%20reload%20if%20you%20make%20changes%20to%20the%20code.%20For%20now%2C%20it%20requires%20that%20WordPress%20has%20the%20SCRIPT_DEBUG%20flag%20enabled%20and%20the%20Gutenberg%20plugin%20installed.
Does anyone else experience this bug or can anyone help with this?
Many thanks and cheers
Johannes
Alternative
I could not get react-refresh-webpack-plugin to work with @wordpress/scripts, so I ended up using BrowserSyncPlugin instead. Here is how I extended the @wordpress/scripts webpack config; hope it helps someone.
/**
* External Dependencies
*/
const path = require('path');
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');
/**
* WordPress Dependencies
*/
const defaultConfig = require('@wordpress/scripts/config/webpack.config.js');
module.exports = {
...defaultConfig,
...{
entry: {
theme: path.resolve(process.cwd(), 'src', 'theme.js'),
},
plugins: [
...defaultConfig.plugins,
new BrowserSyncPlugin({
// inspect docker network to get this container external ip
// then specify it in host
host: '172.44.0.3', //docker container host
port: 8887,
//server: { baseDir: ['public'] },
proxy: 'http://172.44.0.1:7020/', //docker container host
open: false
}),
]
}
}
and yarn run start
Disclaimer: I am developing in a Docker environment. I know this is not an answer to the question, but if you are stuck, kindly try it out.
import nextJest from 'next/jest'
const createJestConfig = nextJest({
dir: './',
})
// Add any custom config to be passed to Jest
const customJestConfig = {
setupFilesAfterEnv: ['<rootDir>/setupTests.js'],
moduleDirectories: ['node_modules', '<rootDir>/'],
testEnvironment: 'jest-environment-jsdom',
modulePathIgnorePatterns: ['./cypress'],
// testMatch: ['<rootDir>/**/*.test.js', '<rootDir>/**/*.test.jsx'],
}
export default createJestConfig(customJestConfig)
In my project, we use a Next.js application with both Cypress and Jest. The recommended, up-to-date jest.config.ts is shown above.
If you are facing this problem, you can try checking your modulePathIgnorePatterns.
I added ./ to ['cypress'] (making it ['./cypress']), and then it worked well. So I think Jest just couldn't recognize the path without it.
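If relative paths still behave oddly, an equivalent and more explicit form (just a suggestion, following the <rootDir> pattern from Jest's docs) would be:

const customJestConfig = {
  // ...other options as above
  modulePathIgnorePatterns: ['<rootDir>/cypress/'], // same effect as ['./cypress'], anchored to the project root
}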
I'm running into a weird situation here that is becoming extremely hard to debug.
I have made a post here https://forum.cleavr.io/t/cloudflare-caching-in-a-quasar-app/844 thinking that the problem was about caching.
We are having hydration errors in our webapp codotto.com ONLY after a few hours (or minutes depending on website traffic). As soon as I redeploy the app, everything works well. No hydration errors anymore.
We had the idea that it was caching but we have disabled caching completely in our Cloudflare dashboard:
Then we can verify that the cache is not being used:
Note the CF-Cache-Status header set to DYNAMIC (refer to here to see that when the CF-Cache-Status header is DYNAMIC, the cache is not being used).
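A quick way to check that header from a script (a sketch, assuming Node 18+ with its built-in fetch):

// check whether Cloudflare served the response from its cache
fetch('https://codotto.com', { method: 'HEAD' }).then((res) => {
  // "DYNAMIC" means the response was not served from Cloudflare's cache
  console.log(res.headers.get('cf-cache-status'));
});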
Locally the app works great, and we are not able to reproduce the issue locally or in staging. It only happens in production. Here are our pm2 configurations:
production
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
name: "codotto.com",
script: "index.js",
args: "",
log_type: "json",
cwd: "xxxx/codotto.com/artifact",
// Note: We detected that this web app is running on a server with multiple CPUs and hence we are
// setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
instances : 1, // change the value to "1" if your server has only 1 CPU
// exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
env: {
"PORT": 7145,
"CI": 1,
"NUXT_TELEMETRY_DISABLED": 1
}
}
staging
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
name: "staging.codotto.com",
script: "index.js",
args: "",
log_type: "json",
cwd: "xxxxx/staging.codotto.com/artifact",
// Note: We detected that this web app is running on a server with multiple CPUs and hence we are
// setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
instances : 1, // change the value to "1" if your server has only 1 CPU
// exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
env: {
"PORT": 9892,
"CI": 1,
"NUXT_TELEMETRY_DISABLED": 1
}
}
We are running out of ideas. This only happens in production, which makes it extremely hard to debug, and we are narrowing the problem down to the server configuration.
We understand that this question might be flagged as off-topic since it's not too specific, but what I'm looking for are ideas on things that might help debug this issue. The web app is built with the Quasar framework in SSR mode, and we use https://cleavr.io/ to deploy it.
We have tried following this guide on Quasar's website to debug hydration errors, but it hasn't gotten us anywhere.
In case you would like to reproduce the bug, you will need to sign up to an account in codotto.com, then visit https://codotto.com so that you are redirected to the dashboard instead of the landing page.
Can anyone here help or explain why we have these hydration errors?
The problem was not related to caching or any of the other issues we thought it was.
In one of our bootfiles we had the following:
export default boot(({ router }) => {
router.beforeEach((to, from, next) => {
const requiresAuthentication = to.matched.some(
(record) => record.meta.needsAuthentication
);
const appStore = useAppStore();
if (requiresAuthentication && !appStore.isLoggedIn) {
next({ name: 'auth-login' });
} else {
const onlyGuestCanSee = to.matched.some(
(record) => record.meta.onlyGuestCanSee
);
if (onlyGuestCanSee && appStore.isLoggedIn) {
next({ name: 'dashboard' });
} else {
next();
}
}
});
});
In this file we didn't pass the store to useAppStore, which caused cross-request state pollution and, consequently, the hydration errors. The fix was to pass the store to useAppStore:
export default boot(({ router, store }) => {
const appStore = useAppStore(store);
...
})
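For completeness, the corrected boot file would look roughly like this (a sketch combining the snippets above; the store import path is an assumption you'd adapt to your project):

import { boot } from 'quasar/wrappers';
import { useAppStore } from 'src/stores/app'; // assumption: adjust to your actual store path

export default boot(({ router, store }) => {
  // pass the per-request store instance so SSR requests don't share state
  const appStore = useAppStore(store);

  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );

    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});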
When I test my React Native app, which has this in my Firebase config file:
const auth = initializeAuth(app, {
persistence: getReactNativePersistence(AsyncStorage),
});
I get the error message: Cannot find module '@firebase/auth/react-native' from 'node_modules/firebase/auth/react-native/dist/index.cjs.js'
The app works fine in runtime, but doesn't seem to work during tests.
Any ideas to what this error could be caused by?
I was having the same issue this morning. Did you get it resolved?
I found that creating a file in the __mocks__ directory called @firebase/auth/react-native.js and exporting a named function getReactNativePersistence resolved the issue (for now) for me. It's just a dummy function at this point, but my tests run.
__mocks__/@firebase/auth/react-native.js
export const getReactNativePersistence = () => {
return;
};
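Alternatively (a sketch I haven't verified against this exact setup), you could point Jest at the mock explicitly via moduleNameMapper, merged into your existing Jest config, instead of relying on automatic __mocks__ resolution:

// jest.config.js – map the scoped module to the manual mock explicitly
module.exports = {
  moduleNameMapper: {
    '^@firebase/auth/react-native$': '<rootDir>/__mocks__/@firebase/auth/react-native.js',
  },
};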
My main goal is to create an Electron app (Windows) that locally stores data in an SQLite database. Because of type safety, I chose the Prisma framework over other SQLite frameworks.
I took this Electron sample project and am now trying to include Prisma. Depending on what I try, different problems arise.
1. PrismaClient is unable to be run in the Browser
I executed npx prisma generate and then tried to execute this function via a button:
import { PrismaClient } from '@prisma/client';
onSqlTestAction(): void {
const prisma = new PrismaClient();
const newTestObject = prisma.testTable.create(
{
data: {
value: "TestValue"
}
}
);
}
When executing this in Electron I get this:
core.js:6456 ERROR Error: PrismaClient is unable to be run in the browser.
In case this error is unexpected for you, please report it in https://github.com/prisma/prisma/issues
at new PrismaClient (index-browser.js:93)
at HomeComponent.onSqlTestAction (home.component.ts:19)
at HomeComponent_Template_button_click_7_listener (template.html:7)
at executeListenerWithErrorHandling (core.js:15281)
at wrapListenerIn_markDirtyAndPreventDefault (core.js:15319)
at HTMLButtonElement.<anonymous> (platform-browser.js:568)
at ZoneDelegate.invokeTask (zone.js:406)
at Object.onInvokeTask (core.js:28666)
at ZoneDelegate.invokeTask (zone.js:405)
at Zone.runTask (zone.js:178)
It somehow seems logical that Prisma cannot run in a browser, but I am actually building a native app; Electron just embeds a browser. It feels like a loophole.
2. BREAKING CHANGE: webpack < 5 used to include polyfills
So I found this question: How to use Prisma with Electron.
It seemed to be exactly what I was looking for, but the error message there is different (Debian binaries were not found).
The solution provided there is to generate the Prisma artifacts into the src folder instead of node_modules, which leads to 19 polyfill errors. One example:
./src/database/generated/index.js:20:11-26 - Error: Module not found: Error: Can't resolve 'path' in '[PATH_TO_MY_PROJECT]\src\database\generated'
BREAKING CHANGE: webpack < 5 used to include polyfills for node.js core modules by default.
This is no longer the case. Verify if you need this module and configure a polyfill for it.
If you want to include a polyfill, you need to:
- add a fallback 'resolve.fallback: { "path": require.resolve("path-browserify") }'
- install 'path-browserify'
If you don't want to include a polyfill, you can use an empty module like this:
resolve.fallback: { "path": false }
And this repeats for 18 other modules. Since the original error message was different to begin with, I doubt that this is the way to go.
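For completeness, the fallback route the error message describes would look roughly like this in a custom webpack config (a sketch only; as said, I doubt this is the right direction for Prisma):

// webpack.config.js – stub or polyfill the Node core modules webpack complains about
module.exports = {
  resolve: {
    fallback: {
      path: require.resolve('path-browserify'), // requires `npm install path-browserify`
      fs: false, // modules with no browser equivalent can simply be stubbed out
    },
  },
};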
I finally figured this out. What I needed to understand was that every Electron app consists of two parts: the frontend web app (running in embedded Chromium) and a Node.js backend. These are the renderer process and the main process, and they communicate with each other via IPC. Since Prisma can only run in the main process, i.e. the backend, I had to send my SQL actions to the Electron backend and execute them there.
My minimal example
In the frontend (I use Angular)
// This refers to the node_modules folder of the Electron Backend, the folder where the main.ts file is located.
// I just use this import so that I can use the prisma generated classes for type safety.
import { TestTable } from '../../../app/node_modules/.prisma/client';
// Button action
onSqlTestAction(): void {
  this.electronService.ipcRenderer.invoke("prisma-channel", 'Test input').then((value) => {
    const testObject: TestTable = JSON.parse(value);
    console.log(testObject);
  });
}
The sample project I used already had this service to provide the IPC Renderer:
@Injectable({
providedIn: 'root'
})
export class ElectronService {
ipcRenderer: typeof ipcRenderer;
webFrame: typeof webFrame;
remote: typeof remote;
childProcess: typeof childProcess;
fs: typeof fs;
get isElectron(): boolean {
return !!(window && window.process && window.process.type);
}
constructor() {
// Conditional imports
if (this.isElectron) {
this.ipcRenderer = window.require('electron').ipcRenderer;
this.webFrame = window.require('electron').webFrame;
this.childProcess = window.require('child_process');
this.fs = window.require('fs');
// If you want to use NodeJS 3rd-party deps in the Renderer process (like @electron/remote),
// they must be declared in the dependencies of both package.json files (in the root and app folders).
// If you want to use the remote object in the renderer process, please set enableRemoteModule to true in main.ts
this.remote = window.require('@electron/remote');
}
}
}
And then in the Electron backend I first added "@prisma/client": "^3.0.1" to the package.json (of the Electron backend, not the frontend). Then I added this handler to main.ts to handle the requests from the renderer:
// main.ts
ipcMain.handle("prisma-channel", async (event, args) => {
const prisma = new PrismaClient();
await prisma.testTable.create(
{
data: {
value: args
}
}
);
const readValue = await prisma.testTable.findMany();
return JSON.stringify(readValue);
})
Simply adding the IPC main handler to the main.ts file like this is of course a big code smell, but it is useful as a minimal example. I think I will move on with the architecture concept presented in this article.
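One small refinement worth considering (my own suggestion, not part of the sample project): create the PrismaClient once at module scope instead of on every IPC call, since each instance maintains its own connection pool. This sketch also spells out the imports the handler needs:

// main.ts – same handler as above, but with a single shared Prisma client
import { ipcMain } from 'electron';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

ipcMain.handle("prisma-channel", async (event, args) => {
  // write the incoming value, then return everything stored so far
  await prisma.testTable.create({ data: { value: args } });
  const readValue = await prisma.testTable.findMany();
  return JSON.stringify(readValue);
});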